AMD Quietly Funded A Drop-In CUDA Implementation Built On ROCm: It's Now Open-Source


  • Originally posted by Panix View Post
    "I also already explained to you HIP-RT already works on Windows" - Sure, it does. It's just going to be labeled 'experimental' forever, right? That's why it's still not included in the open data results. Also, there's almost no site that discusses 'how well it's working' - I guess it's an enigma.

    You're just a pathetic troll. I MADE the original point that ZLUDA was included in the Blender Open Data sample results, idiot. LOL! THAT WAS THE POINT.

    ZLUDA, a project implemented by a former Intel employee whom AMD brought in, was able to deliver a working CUDA hack - while AMD has been fumbling HIP-RT for the past several Blender versions: it is STILL too unstable, or doesn't work properly enough, to be released OFFICIALLY - or it would ALSO be part of the Blender results.

    I cited posts in the Blender developer forums showing that ppl are still having issues with HIP-RT, while OptiX is a tool that eclipses CUDA's performance - so while ZLUDA is nice and interesting, OptiX has much better performance. It's just another example of how AMD drops the ball.

    Yeah, there are fewer CUDA samples for the 3090 now - and for the few registered on the site, the score is lower than the HIP score for the 7900 XTX. Okay? How much is a used 3090 vs the retail 7900 XTX, huh? You have no straws left to grasp at, so you're making stupid points. I already knew all that, which is why I was considering a used 7900 XTX in my country - so you're not telling me anything I didn't already know! The problem, though, is that you just need to enable OptiX and you get much better performance from an older, cheaper gpu.

    The 4080 still destroys all the AMD gpus - including the 7900 XTX - as it can also use OptiX, and its CUDA performance is better too. Ditto the 4080 Super, which is close in price to the 7900 XTX - either AMD has to lower the price of these cards or get HIP-RT working (competitively), because they just look bad in comparison.

    In the Blender forums you're talking about, and in the dev forums, many AMD gpu owners are mostly complaining about crashes, stutters and performance issues - so it's ironic that you want me to go 'read there.' LOL! You're such a clown!

    But, nah, you're either a troll or an AMD shill or both.
    I think the point is actually that the quality of OptiX rendering is very subpar. Yes, it seems faster, but it isn't actually doing what the other codepaths are doing.

    You can ignore the fact that renders with OptiX look like ass all you want.

    You do -NOT- get better performance; actually, what you get is much worse quality.

    EDIT: And that has pretty much always been nVidia's go-to modus operandi. On pretty much every generation of every card, in every product line, on every API, nVidia hardware produces worse quality output. It -ALWAYS- has. OptiX takes that modus operandi to an extreme..... Again...

    EDIT: When nVidia can't win they cheat...
    Last edited by duby229; 18 February 2024, 10:25 AM.



    • Originally posted by duby229 View Post
      I think the point is actually that the quality of OptiX rendering is very subpar. Yes, it seems faster, but it isn't actually doing what the other codepaths are doing.

      You can ignore the fact that renders with OptiX look like ass all you want.

      You do -NOT- get better performance; actually, what you get is much worse quality.

      EDIT: And that has pretty much always been nVidia's go-to modus operandi. On pretty much every generation of every card, in every product line, on every API, nVidia hardware produces worse quality output. It -ALWAYS- has. OptiX takes that modus operandi to an extreme..... Again...

      EDIT: When nVidia can't win they cheat...
      Oh great, another fanboy. I know that there were complaints early on about OptiX - but most ppl involved in Blender recommend Nvidia cards, particularly for the OptiX tools.

      Blender is capable of using GPU rendering to speed up the rendering process. Two popular options for GPU rendering in Blender are CUDA & OptiX.


      iRender is a powerful solution for Blender rendering. In this blog, iRender will compare CUDA vs OptiX Render in Blender Cycles.


      Discover how Nvidia's Optix Denoiser integration in Blender can reduce render times by up to 95% while maintaining high-quality results. Learn about its benefits, limitations, and alternative solutions for animation. Don't miss this denoiser showdown!


      The only thing I found is problems when using animation. So, if you have something more that illustrates that all these ppl are wrong or that there is fault to find in OptiX use, go ahead and explain. I think you're just joining the list of AMD fanboys here - if AMD had a strategy of caring about productivity software instead of just gaming, they'd have more users.



      • Originally posted by Panix View Post
        Oh great, another fanboy. I know that there were complaints early on about OptiX - but most ppl involved in Blender recommend Nvidia cards, particularly for the OptiX tools.

        Blender is capable of using GPU rendering to speed up the rendering process. Two popular options for GPU rendering in Blender are CUDA & OptiX.


        iRender is a powerful solution for Blender rendering. In this blog, iRender will compare CUDA vs OptiX Render in Blender Cycles.


        Discover how Nvidia's Optix Denoiser integration in Blender can reduce render times by up to 95% while maintaining high-quality results. Learn about its benefits, limitations, and alternative solutions for animation. Don't miss this denoiser showdown!


        The only thing I found is problems when using animation. So, if you have something more that illustrates that all these ppl are wrong or that there is fault to find in OptiX use, go ahead and explain. I think you're just joining the list of AMD fanboys here - if AMD had a strategy of caring about productivity software instead of just gaming, they'd have more users.
        None of that changes the fact that OptiX produces much worse quality output. If your point was that nVidia pays shills, then yes, I agree with you. But that still doesn't change the fact that OptiX produces shit quality output.



        • Originally posted by duby229 View Post

          None of that changes the fact that OptiX produces much worse quality output. If your point was that nVidia pays shills, then yes, I agree with you. But that still doesn't change the fact that OptiX produces shit quality output.
          According to you, or is that the general consensus? Want to show me enough evidence that this is the general opinion/perspective of Blender users? Or is it just your opinion? It would be intriguing if that is what most ppl think - other than AMD fanboys - because that would be a new discovery for me. I would also be inclined to re-think my assessment and my leaning on gpus - and consider an AMD gpu a much more acceptable buy/acquisition - except I suspect you're just full of **** or greatly exaggerating the negative impression, if any.



          • Originally posted by Panix View Post
            "I also already explained to you HIP-RT already works on Windows" - Sure, it does. It's just going to be labeled 'experimental' forever, right? That's why it's still not included in the open data results. Also, there's almost no site that discusses 'how well it's working' - I guess it's an enigma.

            Edit: Puget briefly mentions it from when Blender was at 3.6 - it didn't sound exciting - and since then there's been discussion of it 'regressing' with Blender 4.0. There are two camps (I would say): ppl who say it's a bit faster than HIP, and others who claim that it doesn't work at all or is unstable/crashing. Either way, this is something AMD (with some assistance from Blender themselves?) has been working on for a while without much progress. It's very disappointing, and I just use it as an example of AMD either lacking support, lacking attention to it, or outright being unable to deliver the tech/feature, period.

            You're just a pathetic troll. I MADE the original point that ZLUDA was included in the Blender Open Data sample results, idiot. LOL! THAT WAS THE POINT.
            Lmao 🤣
            The troll with 0 actual links calling the other person, the one with data, a troll; no data, just "trust me bro."
            GTFO, I am done here; you're a broken record and everything you say can be debunked with what I have already written. Also, that is not what you said. You said Open Data supports ZLUDA; it doesn't work like that: it supports CUDA, and ZLUDA allows the AMD cards to use that.
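            To spell out what "drop-in" actually means here (a generic sketch, not code taken from ZLUDA or Blender): the application only ever talks to the standard CUDA runtime API, roughly like the toy program below, and a replacement CUDA library can answer those same calls on top of ROCm without the application being recompiled.

            // Toy CUDA program using only the ordinary runtime API.
            // Nothing here is ZLUDA-specific; a drop-in replacement just has
            // to answer these same standard calls with ROCm underneath.
            #include <cuda_runtime.h>
            #include <cstdio>

            __global__ void scale(float *data, float factor, int n) {
                int i = blockIdx.x * blockDim.x + threadIdx.x;
                if (i < n) data[i] *= factor;   // simple elementwise scale
            }

            int main() {
                const int n = 1 << 20;
                float *d = nullptr;
                cudaMalloc(&d, n * sizeof(float));            // standard CUDA allocation
                scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // standard kernel launch
                cudaDeviceSynchronize();                      // wait for the GPU to finish
                cudaFree(d);
                printf("done\n");
                return 0;
            }

            Which is also why those results just show up under CUDA on Open Data: the benchmark itself has no idea what is sitting underneath.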

            You still don't address Nvidia's weaker viewport performance; I guess Nvidia is allowed to perform less. Nor do you ever criticize Nvidia's poor VRAM offering. I guess Nvidia is allowed to sell actually inferior hardware (not to even get into the bus speed that affects compute performance like rendering and AI).

            Just shut up, you have 0 experience on the subjects you talk about, plain laughable.
            Keep talking nonsense on the forums, every village needs an idiot.

            Edit:
            I just checked the Blender issue tracker to see if there are any OptiX issues.
            There is a pretty big one (#118020, severity high): OptiX crashing when OSL is enabled. By Panix's absolute joke of a logic, we must assume this never gets patched and OptiX is unusable now. Crazy what you find when you explicitly search for a shortcoming and make up a reason why it is such a big deal, without knowing or explaining that it very well might not affect your use case.
            What is OSL (Open Shading Language, the scripted shader support in Cycles)? Panix won't know, but he sure will have an opinion on it.
            Last edited by tenchrio; 18 February 2024, 03:56 PM.



            • Originally posted by Panix View Post
              According to you, or is that the general consensus? Want to show me enough evidence that this is the general opinion/perspective of Blender users? Or is it just your opinion? It would be intriguing if that is what most ppl think - other than AMD fanboys - because that would be a new discovery for me. I would also be inclined to re-think my assessment and my leaning on gpus - and consider an AMD gpu a much more acceptable buy/acquisition - except I suspect you're just full of **** or greatly exaggerating the negative impression, if any.
              It is much simpler than that, and you never tell us whether you, Panix, even personally like the OptiX output.

              Because if you tell us this secret, the result is very simple:

              if you like the OptiX output, then buy your 3090;

              if you don't like the OptiX output, then buy a 7900 XTX.

              But if you are a professional and you do really complex and big projects, you will buy an AMD PRO W7900 anyway, because you need 48GB of VRAM...

              And if you are a professional who has a donkey that shits gold nuggets and you do not have to care about money at all, you can buy a 48GB VRAM Nvidia card.




              • Originally posted by Panix View Post
                According to you, or is that the general consensus? Want to show me enough evidence that this is the general opinion/perspective of Blender users? Or is it just your opinion? It would be intriguing if that is what most ppl think - other than AMD fanboys - because that would be a new discovery for me. I would also be inclined to re-think my assessment and my leaning on gpus - and consider an AMD gpu a much more acceptable buy/acquisition - except I suspect you're just full of **** or greatly exaggerating the negative impression, if any.
                So I do have to admit that I don't even have a modern nVidia card to experiment and play with rn... So I'll just give that much up: I don't have any way of actually testing OptiX out for myself on my own hardware.

                It's not a money issue, it's a desire issue for me.

                And yes, the general consensus seems to be that OptiX produces much worse quality output. If quality isn't a concern but render times are, then OptiX does seem to have some value in that regard.



                • Originally posted by qarium View Post

                  It is much simpler than that, and you never tell us whether you, Panix, even personally like the OptiX output.

                  Because if you tell us this secret, the result is very simple:

                  if you like the OptiX output, then buy your 3090;

                  if you don't like the OptiX output, then buy a 7900 XTX.

                  But if you are a professional and you do really complex and big projects, you will buy an AMD PRO W7900 anyway, because you need 48GB of VRAM...

                  And if you are a professional who has a donkey that shits gold nuggets and you do not have to care about money at all, you can buy a 48GB VRAM Nvidia card.
                  LMAO. Yes, that's why Puget Systems, which actually configures systems for Blender (one program among others), only includes Nvidia configurations. You're a funny guy, Q - almost as amusing as your troll buddy, tenchrio.



                  • Originally posted by tenchrio View Post
                    Lmao 🤣
                    The troll with 0 actual links calling the other person, the one with data, a troll; no data, just "trust me bro."
                    GTFO, I am done here; you're a broken record and everything you say can be debunked with what I have already written. Also, that is not what you said. You said Open Data supports ZLUDA; it doesn't work like that: it supports CUDA, and ZLUDA allows the AMD cards to use that.

                    You still don't address Nvidia's weaker viewport performance; I guess Nvidia is allowed to perform less. Nor do you ever criticize Nvidia's poor VRAM offering. I guess Nvidia is allowed to sell actually inferior hardware (not to even get into the bus speed that affects compute performance like rendering and AI).

                    Just shut up, you have 0 experience on the subjects you talk about, plain laughable.
                    Keep talking nonsense on the forums, every village needs an idiot.

                    Edit:
                    I just checked the Blender issue tracker to see if there are any OptiX issues.
                    There is a pretty big one (#118020, severity high): OptiX crashing when OSL is enabled. By Panix's absolute joke of a logic, we must assume this never gets patched and OptiX is unusable now. Crazy what you find when you explicitly search for a shortcoming and make up a reason why it is such a big deal, without knowing or explaining that it very well might not affect your use case.
                    What is OSL (Open Shading Language, the scripted shader support in Cycles)? Panix won't know, but he sure will have an opinion on it.
                    Liar. I posted several links and cited various websites on here; of course, you post none. Also, you keep going off on tangents about things I already conceded - like Nvidia's lack of VRAM on many gpus. I mentioned this and chastised Nvidia for their bs; I am not an Nvidia fan, and I know their business policies and practices are bad. I am looking mostly at used gpus and made it clear that I am mostly discussing and comparing the tech. I also made it clear that it's preferable to get as much VRAM as possible - I sold a 3080 10GB gpu because I wanted more VRAM, just in case. I don't upgrade often, so I am taking my time to decide. I also wanted to stick with open source if possible, so I do want AMD to have good performance - they just don't in the areas where I want to use the gpu. I think gaming is a toss-up, a coin flip - I don't care about upscaling/ray tracing in games, or whether DLSS is better than FSR; some Nvidia and AMD ppl who mostly concentrate on gaming will argue about that.

                    The debate topic I am concerned with here is rendering/GPGPU compute/video editing - maybe AI/ML - but you couldn't cite anything against OptiX except one issue. It's severe - okay, noted. But HIP/HIP-RT has a gazillion. LOL!

                    That OptiX issue doesn't stop ppl from preferring/picking Nvidia gpus over AMD's for Blender. You must have done different research in another universe.



                    • Originally posted by duby229 View Post

                      So I do have to admit that I don't even have a modern nVidia card to experiment and play with rn... So I'll just give that much up: I don't have any way of actually testing OptiX out for myself on my own hardware.

                      It's not a money issue, it's a desire issue for me.

                      And yes, the general consensus seems to be that OptiX produces much worse quality output. If quality isn't a concern but render times are, then OptiX does seem to have some value in that regard.
                      It's not so much OptiX as it is just plain Cycles. For architectural and realistic rendering you would use LuxCore, OctaneRender ($700/year) or V-Ray ($53/mo); they beat Cycles in realism (Cycles still relies on ambient occlusion, while I am pretty sure all three of the aforementioned use global illumination).

                      For non-photorealistic renders you would use Eevee, BEER, Pixar's RenderMan ($600 for the license) or, for the really daring, Workbench (there is always that one guy).
                      There are also "it is an option" engines like AMD's ProRender, which I would now, ironically, recommend for Intel cards since it uses Vulkan (and Eevee-next seems to be delayed until Blender 4.2), but I can't say I cared for it; I tested it once and it was faster than Cycles in the same scene (at least on an Nvidia card), but I focused on Geometry Nodes instead.
                      There is also Nvidia Omniverse, but that is a weird one - as far as I know it is a USD platform and render engine in one (I guess the AMD USD Hydra add-on would fit that same description, since with Blender 4.0 you can only use ProRender through that add-on, unlike before when it was its own plugin; I haven't used either, and I've been meaning to take a course on the USD workflow to improve my work) - but I don't really know a single person that uses it, despite having an overwhelming majority of Nvidia users in my friend group.

                      The majority of those don't use OptiX and rely mostly on CUDA (off the top of my head, I think only Omniverse and Octane have fully fledged OptiX support; some even operate through Vulkan or OpenGL). I'm not saying no projects are ever done with Cycles, but to say it is the be-all and end-all of Blender performance is plain wrong. It already irks me that all we see in CPU benchmarks is just two of the oldest render files, when things like fluid and smoke simulation rely on Mantaflow, which lacks GPU acceleration.
                      And some Blender users don't care about render engines at all - ask the VRChat and VRM modelers (I've done both, and in both cases I never rendered; I just set up materials in material preview mode and checked whether they looked good in Unity).

