More Development Activity Ticking Up Around Vulkan For Blender


  • #31
    Originally posted by mirmirmir View Post

    I'm using an obscure 5000-series dual-GPU laptop pog
    Wait, so HIP is now available for non-6000-series consumers? That's pretty cool
    Just saw here in Blender, it's enabled for RDNA GPUs (5000- and 6000-series). Nice.



    • #32
      Originally posted by Amarildo View Post
      Sorry, but you're wrong. Blender does support CUDA; it's an option anyone can select, e.g. if OptiX isn't working (and it's scriptable, too; see the sketch after this quote). It's "slow" compared to OptiX, but it's an option. Also, the SheepIt render farm uses CUDA and not OptiX.

      That is also not true. The fastest consumer-grade CPUs will probably never beat the fastest GPUs, or even mid-range GPUs. Just so you can see how much of a difference there is: according to TechPowerUp, the newest 7950X renders the BMW scene in 63.7 seconds. Seems great, right? That is, until you compare it to a mid-range GPU like the RTX 3060, which renders the same scene in 13 seconds.

      Not only that, the same mid-range GPU renders the Classroom scene at about 600 samples/minute, while the 7950X manages 140.

      Also not true. CUDA is massively better than CPU rendering, even if you compare a CUDA-enabled GPU with a CPU that is more expensive. Let's continue our RTX 3060 vs 7950X comparison: for the Classroom benchmark, the 3060 (with CUDA) renders it in about 55 seconds, or around 300 samples/minute, more than double that of the 7950X.

      Nobody in the 3D industry avoids NVIDIA. In fact, it's the de facto choice for 3D artists.

      This is a bit of a stretch. No professionals avoid NVIDIA, and certainly not because of a "lack of VRAM".

      There is something to be considered, obviously. For instance, a big CGI studio (like Platige in Poland) that renders with Maya/Arnold faces limitations in Arnold itself that prevent them from using GPUs, like Arnold not being able to render particle streaks or Bifrost volumes on the GPU. This has nothing to do with NVIDIA or their VRAM; these are problems in Arnold that still need addressing. In addition, Arnold can be massively unstable when rendering on GPUs, and its GPU path can actually slow down the overall render: sometimes, somehow, the GPU renders come out "blurry" and therefore need more samples, so the render ends up taking longer than it would on the CPU. It's not every scene, though; I observed this mostly in scenes with lots of volumetrics and vegetation.

      This, however, doesn't happen with Redshift, V-Ray, or Cycles. So it's merely an Arnold limitation.

      Can you post a video showing your GPU inside Blender 3.3? Did AMD release HIP for older cards? AFAIK Blender removed OpenCL support in v3.0.

      Raw performance is useless if AMD doesn't up their game in the path-tracing department. Just look at their RX 6950 XT: basically up there with a 3090, but in ray tracing and 3D rendering it lacks performance, because AMD's investment in RT cores (or whatever name they use) is sadly not turning out too well for them, especially since they were a little late to the ray-tracing party.
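      The backend switch mentioned at the top of this quote is scriptable, by the way. A minimal sketch, assuming a Blender 3.3-era build with the bundled Cycles add-on (run it from the scripting tab or via blender --python):

          import bpy

          # Cycles keeps the compute backend in its add-on preferences.
          prefs = bpy.context.preferences.addons["cycles"].preferences
          prefs.compute_device_type = "CUDA"  # or "OPTIX" / "HIP"; assigning a value
                                              # this build doesn't offer raises TypeError

          # Refresh the device list for that backend and enable everything found.
          prefs.get_devices()
          for device in prefs.devices:
              device.use = True
              print(device.name, device.type)

          # Point the scene at the GPU instead of the CPU.
          bpy.context.scene.cycles.device = "GPU"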
      Only non-professionals think an RTX 4090 with 24 GB of VRAM is good for 3D artists in Blender, because their small projects fit inside the 24 GB of VRAM...

      More professionals use EPYC servers or Threadrippers with 512 GB of RAM or more.

      You are right that an RTX 4090, or even, as you say, an RTX 3060, is faster than an AMD Ryzen 7950X,

      but that's not my point at all. The "faster" you cite always comes from benchmarks with small projects that do in fact fit inside the VRAM.

      "Can you post a video of you showing your GPU inside Blender 3.3? Did AMD release HIP for older cards? AFAIK Blender removed OpenCL support in v3.0."

      Yes, OpenCL is removed in Blender 3.1/3.2/3.3/3.4 and onward,

      and yes, the ROCm/HIP solution has been backported to the Vega 64 and Radeon VII, but there is still no Polaris support.
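
      Both points are easy to check from Blender's own Python console; a sketch against the 3.3-era bpy API, where the enum of shipped backends is the giveaway (OPENCL is gone, HIP is in):

          import bpy

          # Ask the Cycles add-on which compute backends this build actually ships.
          prefs = bpy.context.preferences.addons["cycles"].preferences
          backends = prefs.bl_rna.properties["compute_device_type"].enum_items
          print([item.identifier for item in backends])
          # A Linux Blender 3.3 build prints something like
          # ['NONE', 'CUDA', 'OPTIX', 'HIP'] - no OPENCL entry anymore.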

      "Arnold limitation."

      Right, and there are many more reasons why professionals avoid NVIDIA and use CPU rendering instead.

      "The fastest consumer-grade CPUs"

      I really don't know why you talk about the fastest consumer-grade CPUs...

      If you look at the benchmark results of the new 96-core EPYC, the results are very, very good, even in Blender.

      But that CPU is 10,000€ right now. Then again, as the battle between the 13900KS and the 7950X shows, CPU prices can drop very fast.

      I say this: if your projects fit inside the 24 GB of VRAM of an RTX 4090 with OptiX, you are a happy man.

      But from my point of view, it looks like a Radeon 7900 XTX for 999 dollars will deliver good results too.



      • #33
        Originally posted by Amarildo View Post
        Wait, so HIP is now available for non-6000-series consumers? That's pretty cool
        Just saw here in Blender, it's enabled for RDNA GPUs (5000- and 6000-series). Nice.
        ROCm/HIP for Blender 3.3 or newer works on some cards and generations and not on others.

        Polaris, for example, does not work.
        People say the RX 5700 cards are not working either.

        What works are the Radeon 6000 cards and the Vega 64/Radeon VII...

        But the very new info these days is that AMD wants to drop support for the Vega cards, too.

        This means that, on consumer cards, once a card is 5 years old or more, AMD no longer invests developer time in it.
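
        A quick way to check what your own card gets is to ask the Cycles add-on directly; again a sketch, assuming a Blender 3.3-era bpy API:

            import bpy

            prefs = bpy.context.preferences.addons["cycles"].preferences
            prefs.compute_device_type = "HIP"  # raises TypeError on builds without HIP
            prefs.get_devices()  # refresh the device list for the HIP backend

            # A card the HIP backend can't use (Polaris, and reportedly the RX 5700s)
            # simply never shows up here.
            hip_cards = [d.name for d in prefs.devices if d.type == "HIP"]
            print(hip_cards if hip_cards else "no HIP-capable GPU detected")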



        • #34
          Originally posted by qarium View Post
          Only non-professionals think an RTX 4090 with 24 GB of VRAM is good for 3D artists in Blender, because their small projects fit inside the 24 GB of VRAM...
          I am a professional 3D Artist.

          Originally posted by qarium View Post
          You are right that an RTX 4090, or even, as you say, an RTX 3060, is faster than an AMD Ryzen 7950X, but that's not my point at all. The "faster" you cite always comes from benchmarks with small projects that do in fact fit inside the VRAM.
          Because 24 GB is plenty of VRAM for the vast majority of freelancers, and VRAM is not a limitation for studios.

          Also, you're probably thinking big studios (the ones that actually need lots of memory) would use a single 3090 to render their scenes (or even worse, to render each frame). That is comical, honestly ;-)

          My best guess is that you don't know that you can connect multiple GPUs via NVLink and have them share memory. That's fine, not everyone knows that.
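
          Whether a given board exposes any NVLink links at all is easy to query from NVML, for what it's worth; a minimal sketch, assuming the nvidia-ml-py (pynvml) bindings are installed (that's NVIDIA's management library, nothing Blender-specific):

              import pynvml

              pynvml.nvmlInit()
              for i in range(pynvml.nvmlDeviceGetCount()):
                  handle = pynvml.nvmlDeviceGetHandleByIndex(i)
                  name = pynvml.nvmlDeviceGetName(handle)
                  active = 0
                  for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
                      try:
                          state = pynvml.nvmlDeviceGetNvLinkState(handle, link)
                          if state == pynvml.NVML_FEATURE_ENABLED:
                              active += 1
                      except pynvml.NVMLError:
                          break  # no further NVLink links on this device
                  print(f"GPU {i} ({name}): {active} active NVLink link(s)")
              pynvml.nvmlShutdown()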

          Originally posted by qarium View Post
          More professionals use EPYC servers or Threadrippers with 512 GB of RAM or more.
          Can you post the source for this information and clarify what you mean by "professionals"? Of all the CG professionals I've EVER come across, every one recommends NVIDIA GPUs - unless they render with Arnold ;-)

          It depends on the environment. For instance, Digital Domain uses Redshift and V-Ray. Redshift is 100% GPU-based, and while V-Ray can also run on CPUs, I doubt they use CPUs with V-Ray.
          And I know the technical director at Platige Studio (he's actually kind of my mentor). Platige renders everything on the CPU since they render with Arnold (and, as mentioned, Arnold itself still isn't perfect at GPU rendering).

          Originally posted by qarium View Post
          "Arnold limitation." Right there are many more reasons why professional avoid nvidia and use CPU rendering instead.
          (citation needed)

          Originally posted by qarium View Post
          "The fastest consumer-grade CPUs" i really don't know why you talk about fastest consumer grade cpu...
          Because we were talking about "consumer-grade" GPUs. I hadn't even touched on professional-grade GPUs.
          Last edited by Amarildo; 16 November 2022, 10:03 AM.



          • #35
            Originally posted by Amarildo View Post
            I am a professional 3D Artist.
            Because 24 GB is plenty of VRAM for the vast majority of freelancers, and VRAM is not a limitation for studios.
            Also, you're probably thinking big studios (the ones that actually need lots of memory) would use a single 3090 to render their scenes (or even worse, to render each frame). That is comical, honestly ;-)
            My best guess is that you don't know that you can connect multiple GPUs via NVLink and have them share memory. That's fine, not everyone knows that.
            Can you post the source for this information and clarify what you mean by "professionals"? Of all the CG professionals I've EVER come across, every one recommends NVIDIA GPUs - unless they render with Arnold ;-)
            It depends on the environment. For instance, Digital Domain uses Redshift and V-Ray. Redshift is 100% GPU-based, and while V-Ray can also run on CPUs, I doubt they use CPUs with V-Ray.
            And I know the technical director at Platige Studio (he's actually kind of my mentor). Platige renders everything on the CPU since they render with Arnold (and, as mentioned, Arnold itself still isn't perfect at GPU rendering).
            (citation needed)
            Because we were talking about "consumer-grade" GPUs. I hadn't even touched on professional-grade GPUs.
            NVLink; right, AMD has similar technology. Believe it or not, I know this.
            But you said RTX 3060, and I am sure that card does not have NVLink; the GeForce cards don't have it, or don't advertise it, or have no driver support for it.
            The RTX 4090, for example, has the traces for it on the PCB but no connector soldered on; it is not advertised, and the GeForce driver does not support it. The NVLink feature is for the professional products only.
            Also, what you dare not say is this: even if two cards raise the usable VRAM to 48 GB, performance over NVLink goes down, because the link is a bottleneck...
            So first you say you do GPU rendering because it is faster, then you say you want to use NVLink to compete with high-RAM CPU systems; but then your GPU is no longer so much faster.

            And there is a dimension we have not talked about yet: if you are a 3D artist but want an open-source solution, I think you'd better stay on the CPU-rendering path until AMD has a usable open-source OptiX alternative.

            "Because we were talking about "consumer-grade" GPU's"

            Yes: show me the NVLink on consumer-grade GPUs.

