More Development Activity Ticking Up Around Vulkan For Blender


  • #21
    Originally posted by qarium View Post

    It's the law, that is the problem. AMD could easily make a CUDA compatibility layer at the binary level, but that would be against the law.

    ROCm/HIP is the next best legal thing: make it compatible at the source-code level.

    In Blender, OpenCL is dead... and maybe Vulkan will not make it either, once they discover that NextGL based on Zink and WebGPU are higher level and fit them better, because Vulkan is too low-level...
    It's not about having a CUDA alternative; the issue is that AMD already has a viable compute platform that works well on their cards, called Vulkan. Instead of investing in the Vulkan ecosystem, they made their own buggy ecosystem that hardly works. Vulkan isn't a silver bullet, but between OGL, OCL, and VK a lot of compute needs are covered. AMD has once again reinvented the wheel, shooting themselves in the foot in the process.

    ROCm isn't even officially supported on Polaris anymore, despite it still being a very popular architecture. ROCm is quite literally a joke in many compute communities, because of either typical AMD bugs or how much of a pain it is for users to work with.



    • #22
      Originally posted by Quackdoc View Post
      It's not about having a CUDA alternative; the issue is that AMD already has a viable compute platform that works well on their cards, called Vulkan. Instead of investing in the Vulkan ecosystem, they made their own buggy ecosystem that hardly works. Vulkan isn't a silver bullet, but between OGL, OCL, and VK a lot of compute needs are covered. AMD has once again reinvented the wheel, shooting themselves in the foot in the process.
      ROCm isn't even officially supported on Polaris anymore, despite it still being a very popular architecture. ROCm is quite literally a joke in many compute communities, because of either typical AMD bugs or how much of a pain it is for users to work with.
      Right now the CUDA alternative, as far as Blender is concerned, is failing in the market because OptiX is what people use.
      Only a few people still use CPU/CUDA/HIP, because of the mathematically identical results...
      I think AMD will soon come up with an OptiX alternative.

      I know you hate the Polaris part, but face reality: for companies like AMD or Nvidia, 5-year-old hardware is nothing they care about.

      For Polaris, people can still develop code for themselves thanks to the open-source driver.

      Some people say Vulkan is too low-level, and that even WebGPU is higher level in comparison.

      Also, it looks like Vulkan has some precision problems that keep it from fitting the compute role...

      If you ask me, OpenGL and OpenCL are dead, but I see some future there: NextGL based on Zink could turn into a nice option.

      I am also in favor of Vulkan for compute, but it looks like WebGPU will soon be in pole position for that.

      Nvidia still has the CUDA monopoly, and right now ROCm/HIP is the only answer to it.




      • #23
        Originally posted by qarium View Post

        Right now the CUDA alternative, as far as Blender is concerned, is failing in the market because OptiX is what people use.
        Only a few people still use CPU/CUDA/HIP, because of the mathematically identical results...
        I think AMD will soon come up with an OptiX alternative.

        I know you hate the Polaris part, but face reality: for companies like AMD or Nvidia, 5-year-old hardware is nothing they care about.

        For Polaris, people can still develop code for themselves thanks to the open-source driver.

        Some people say Vulkan is too low-level, and that even WebGPU is higher level in comparison.

        Also, it looks like Vulkan has some precision problems that keep it from fitting the compute role...

        If you ask me, OpenGL and OpenCL are dead, but I see some future there: NextGL based on Zink could turn into a nice option.

        I am also in favor of Vulkan for compute, but it looks like WebGPU will soon be in pole position for that.

        Nvidia still has the CUDA monopoly, and right now ROCm/HIP is the only answer to it.
        My 1050 Ti still works fine and is still being actively developed for; it's getting new features, compute improvements, etc., and it's older than even the 580. My i3-6100 is the same: it's not falling behind its newer peers nearly as hard as Polaris is.

        I haven't seen anyone complain about Vulkan precision issues. Neither OCL nor OGL is dying anytime soon. I don't see WebGPU as a viable alternative to Vulkan support.



        • #24
          Originally posted by Quackdoc View Post
          My 1050 Ti still works fine and is still being actively developed for; it's getting new features, compute improvements, etc., and it's older than even the 580. My i3-6100 is the same: it's not falling behind its newer peers nearly as hard as Polaris is.
          I haven't seen anyone complain about Vulkan precision issues. Neither OCL nor OGL is dying anytime soon. I don't see WebGPU as a viable alternative to Vulkan support.
          I am not in a position to fix your Polaris problem, but GPU prices are low: you can get a Vega 64 for less than 200€ on eBay.

          "I haven't seen anyone complain about Vulkan precision issues."

          Then you have not read as many forum posts as I have. Vulkan's precision guarantees are more like OpenGL's, meaning implementations are allowed to lower precision. It's not like CUDA or OpenCL, which fully guarantee precision.

          That's the main reason why you don't see Vulkan in bigger compute projects.

          But maybe Vulkan extensions will solve these problems in the future.

          "Neither OCL nor OGL is dying anytime soon."

          No future-proof project uses OpenCL or OpenGL... even Blender dropped its OpenCL support.

          "I don't see WebGPU as a viable alternative to Vulkan support."

          ...companies like Apple believe in it, and I also think that this is the future.



          • #25
            Originally posted by stormcrow View Post
            Blender targets where its users are and the APIs that let it do so, regardless of the licensing. I'm sorry, but AMD/Intel GPGPU isn't where the users are, and for damned good reason in this case. Its users are mostly on Nvidia hardware, which uses CUDA, and the reason that's the case is that CUDA got there first, works well, and is easy to install, maintain, and write for.
            As a 3D artist myself, who was previously on AMD and is now on NVIDIA, this is very true. With my previous AMD card (RX 570) almost nobody chose to support it; not even AMD or Blender did a good job of supporting it, and this has been true ever since I gave AMD a shot in 2015 with an R9 270X: while support was initially there and the card worked "OK", after a few updates (from either Blender or the GPU drivers) I just couldn't render anything with my GPU. After some weeks and a new Blender or GPU driver update, things started working again, but then it would just fail again after a while.

            AMD users told me to "hang in there". From 2015 through 2020, AMD support in professional 3D programs was laughable, and the answer was always "buy the next generation", despite the problems never ending and the new generations failing as well. In 2020, either AMD or the Blender devs introduced a bug where, if you tried to render, your entire computer would freeze and you were forced to hard-reboot. This meant that rendering anything with AMD was impossible, regardless of the card's generation. I waited 2 years to see if they'd fix it (because I *really* didn't want to buy NVIDIA), but after waiting for too long AMD gave me the final blow: their new HIP tech was only supported on the latest and greatest cards, i.e. the 6000-series GPUs. That's when I decided not to buy AMD anymore.

            After buying NVIDIA, I hate to say this, but my life as an artist has NEVER been better. Maya's Arnold works, Blender's Cycles is extremely fast with OptiX, Substance Painter works fine, and all my games work well - both on Windows and on Linux.
            Last edited by Amarildo; 13 November 2022, 05:40 AM.



            • #26
              Originally posted by qarium View Post

              To me, it looks like those people on old 3ds Max and Maya will never upgrade and will instead migrate to Blender...
              Maya is "industry standard" for a million reasons, and it's truly miles ahead of the competition (with the exception of Houdini, perhaps). I don't see most studios moving to Blender as it's truly a "man-cave tool" compared to Maya, but that is certainly changing and Blender has been evolving a lot these past 4 years, it's truly amazing. I'm definitely rooting for Blender as it's community-focused and it's GPL-software. It's not there yet, it may take a decade or more for it to be where Maya was 15 years ago (which is a good state in terms of features, really), but eventually I see Blender dominating the 3D space as it's a lot more stable/faster than Maya and has the benefits of being GPL (like code improvement, addons released to the community, etc).



              • #27
                Originally posted by mirmirmir View Post
                You can use AMD HIP just fine. It was different several months ago, but hey.
                The selection of people who can use HIP is (or, at least, was) limited to buyers of the 6000 series. That was a massive blow to AMD users who were on previous generations and had to deal with poor or no support in 3D programs. Not only have AMD users suffered a lot up until now (like that bug where the entire computer freezes if you try rendering in Blender on Windows with OpenCL, which has never been fixed since 2020), but only those who bought a 6000-series card can/could use HIP - and they get less performance than NVIDIA cards that cost less than half of what AMD is charging.

                HIP is definitely the right move for AMD, but AMD should've made the technology available for as many generations as possible, considering the performance their users get even with the top-tier cards.



                • #28
                  Originally posted by qarium View Post

                  Sorry, I have to tell you that you are wrong... because people, if we talk about Blender, no longer use CUDA; they use OptiX on Nvidia hardware.
                  Sorry, but you're wrong. Blender does support CUDA; it's an option anyone can select, e.g. if OptiX isn't working. It's "slow" compared to OptiX, but it's an option. Also, the SheepIt render farm uses CUDA and not OptiX.
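                  Just to illustrate (a rough sketch from memory of the Blender 3.x Python API, so double-check the property names on your version), switching Cycles between CUDA and OptiX is nothing more than a preferences toggle, which is also how render farms script it:

```python
import bpy

# Sketch: force Cycles onto a specific GPU backend (Blender 3.x-era API).
# "OPTIX" needs an RTX-class NVIDIA card; fall back to "CUDA" (or "HIP" on
# supported AMD cards) if it is not available in your build.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"       # or "OPTIX" / "HIP"
prefs.get_devices()                      # refresh the detected device list
for dev in prefs.devices:
    dev.use = True                       # enable every detected device
bpy.context.scene.cycles.device = "GPU"  # render on the GPU instead of the CPU
```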

                  Originally posted by qarium View Post
                  Also, professionals, from my point of view, tend to avoid Nvidia GPUs because of the lack of VRAM, because if you compare the CUDA results to modern CPUs, meaning 64-core EPYC and Ryzen 7000 CPUs and the upcoming Zen 4c CPUs, it looks like modern CPUs are at a similar performance level, with the difference that 128-256 GB of RAM is the new normal even on low-end systems.
                  That is also not true. The fastest consumer-grade CPUs will probably never beat the fastest GPUs, or even mid-range GPUs. Just so you can see how much of a difference there is: according to TechPowerUp, the newest 7950X renders the BMW scene in 63.7 seconds. Seems great, right? That is, until you compare it to a mid-range GPU like the RTX 3060, which renders the same scene in 13 seconds.

                  Not only that, the same mid-range GPU renders the Classroom scene with about 600 samples/minute, while the 7950X renders it with 140.
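                  For clarity, here is the back-of-the-envelope arithmetic on those published numbers (they're quoted figures, not my own measurements):

```python
# Speedups implied by the benchmark figures quoted above.
bmw_7950x_s, bmw_3060_s = 63.7, 13.0     # BMW scene, seconds to finish
cls_7950x_spm, cls_3060_spm = 140, 600   # Classroom scene, samples per minute

print(f"BMW:       RTX 3060 is {bmw_7950x_s / bmw_3060_s:.1f}x faster")      # ~4.9x
print(f"Classroom: RTX 3060 is {cls_3060_spm / cls_7950x_spm:.1f}x faster")  # ~4.3x
```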

                  Originally posted by qarium View Post
                  (I'm not talking about OptiX here, because only CUDA gives the mathematically same results as CPU rendering.)
                  Also not true. CUDA is massively better than CPU rendering, even if you compare a CUDA-enabled GPU with a CPU that is more expensive. Let's continue our RTX 3060 vs 7950X comparison. For the Classroom benchmark, the 3060 (with CUDA) renders it in about 55 seconds (or around 300 samples/minute), more than double that of the 7950X.


                  Originally posted by qarium View Post
                  So why do professionals avoid Nvidia?
                  Nobody in the 3D industry avoids NVIDIA. In fact, it's the de facto choice for 3D artists.


                  Originally posted by qarium View Post
                  That's because even an RTX 4090 has only 24 GB of VRAM, and even if you buy the super-expensive professional version it's only 48 GB of VRAM... and compared to the CPU solution, 128-256 GB of RAM is better.
                  This is a bit of a stretch. No professionals avoid NVIDIA and certainly not because of "lack of VRAM".

                  There is something to be considered, obviously. For instance, if a big CGI studio (like Platige in Poland) uses Maya/Arnold for their rendering, there are certain limitations in Arnold itself that prevent them from using GPUs, like Arnold not being able to render particle streaks or Bifrost volumes on GPUs. This has nothing to do with NVIDIA or their VRAM; these are problems in Arnold that still need addressing. In addition, Arnold can be massively unstable when rendering on GPUs, and its GPU render path can actually slow down the overall render process compared to a CPU render, because sometimes, somehow, the GPU renders come out "blurry" and therefore require more samples, and thus end up taking longer than the CPU render would. It's not on every scene, though; I observed this mostly on scenes with lots of volumetrics and vegetation.

                  This, however, doesn't happen on Redshift, VRay, or Cycles. So it's merely an Arnold limitation.

                  Originally posted by qarium View Post
                  And about support: it's right that CUDA has a superior out-of-the-box experience, but if we talk about Blender, even my 2017-era Vega 64 got Blender 3.3 support...
                  Can you post a video of you showing your GPU inside Blender 3.3? Did AMD release HIP for older cards? AFAIK Blender removed OpenCL support in v3.0.
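                  You don't even need a video; a quick script in Blender's Python console shows exactly what Cycles exposes on a given machine. A rough sketch again (3.x-era API; which backend names exist depends on the build):

```python
import bpy

# List every compute backend this Blender build offers and the devices it sees.
prefs = bpy.context.preferences.addons["cycles"].preferences
for backend in ("CUDA", "OPTIX", "HIP"):
    try:
        prefs.compute_device_type = backend
    except TypeError:
        print(backend, "-> not available in this build")
        continue
    prefs.get_devices()
    names = [d.name for d in prefs.devices] or ["no devices found"]
    print(backend, "->", ", ".join(names))
```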

                  Originally posted by qarium View Post
                  I am pretty sure that with 61 TFLOPs, this 7900 XTX for 999 dollars will be a very, very, very good option for Blender.
                  Raw performance is useless if AMD doesn't up their game in the path-tracing department. Just look at their RX 7950 XT card: basically up there with a 3090, but in ray tracing and 3D rendering it lacks performance - because AMD's effort on RT cores (or whatever name they use) is sadly not turning out too well for them, especially since they were a little late to the ray-tracing party.




                  • #29
                    Originally posted by Amarildo View Post

                    The selection of people who can use HIP is (or, at least, was) limited to buyers of the 6000 series. That was a massive blow to AMD users who were on previous generations and had to deal with poor or no support in 3D programs. Not only have AMD users suffered a lot up until now (like that bug where the entire computer freezes if you try rendering in Blender on Windows with OpenCL, which has never been fixed since 2020), but only those who bought a 6000-series card can/could use HIP - and they get less performance than NVIDIA cards that cost less than half of what AMD is charging.

                    HIP is definitely the right move for AMD, but AMD should've made the technology available for as many generations as possible, considering the performance their users get even with the top-tier cards.
                    I'm using an obscure 5000-series dual-GPU laptop, pog



                    • #30
                      Originally posted by Amarildo View Post

                      Sorry, but you're wrong. Blender does support CUDA; it's an option anyone can select, e.g. if OptiX isn't working. It's "slow" compared to OptiX, but it's an option. Also, the SheepIt render farm uses CUDA and not OptiX.
                      Don't bother replying, he's clearly talking out of his ass and has no idea what he's talking about. He lives in some strange bubble lol.

