Blender 3.0's Cycles X Rendering Performance Is Looking Great


  • #11
    Originally posted by stargeizer View Post
    Coders sure choose CUDA because when you make a struct FooBar{}; in CUDA, it works on both CPU-side and GPU-side, no pains, no complications, easy peasy.
    I've been enjoying this experience with wgpu in Rust: you create a (relatively) clean struct in Rust, and you can use it in your WGSL shader modules (which can have any number of entry points); you can also then run this on top of Vulkan, D3D12, or Metal (and, if you limit your features, you can also run it on top of GLES 3.0 / WebGL 2).
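    Roughly, the pattern looks like this (just a minimal sketch; the struct, field names, and compute entry point are made up for illustration, and it assumes the bytemuck crate with its derive feature for the byte-level plumbing):

    Code:
    // One definition drives both sides: #[repr(C)] + bytemuck on the Rust side,
    // and a matching WGSL struct inside the shader module source.
    use bytemuck::{Pod, Zeroable};

    #[repr(C)]
    #[derive(Clone, Copy, Pod, Zeroable)]
    struct Params {
        scale: f32,
        offset: f32,
        count: u32,
        _pad: u32, // keep the uniform block 16-byte aligned
    }

    // The WGSL mirror of the same struct, passed to wgpu's create_shader_module.
    const SHADER: &str = r#"
    struct Params {
        scale: f32,
        offset: f32,
        count: u32,
        _pad: u32,
    }

    @group(0) @binding(0) var<uniform> params: Params;

    @compute @workgroup_size(64)
    fn main(@builtin(global_invocation_id) id: vec3<u32>) {
        // ... read params.scale / params.offset here ...
    }
    "#;

    fn main() {
        let params = Params { scale: 2.0, offset: 0.5, count: 1024, _pad: 0 };
        // bytemuck hands back the raw bytes you upload with Queue::write_buffer.
        let bytes: &[u8] = bytemuck::bytes_of(&params);
        println!("{} bytes to upload, shader source is {} chars", bytes.len(), SHADER.len());
    }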
    Last edited by microcode; 24 November 2021, 04:24 PM.



    • #12
      Originally posted by rmfx View Post
      Can Vulkan compute do the same as HIP/OpenCL/CUDA, or does it have strong limitations?
      Well, it's a different question in this case, since Apple sponsored a Metal backend; if it can be done with Metal, then it can almost certainly be done on Vulkan.



      • #13
        Originally posted by microcode View Post

        I've been enjoying this experience with wgpu in Rust: you create a (relatively) clean struct in Rust, and you can use it in your WGSL shader modules (which can have any number of entry points); you can also then run this on top of Vulkan, D3D12, or Metal (and, if you limit your features, you can also run it on top of GLES 3.0 / WebGL 2).
        True, but unfortunately Rust still brings some overhead, and results are still quite slow for realtime applications compared with CUDA and C++ (the price to pay for memory-safe operations, unfortunately). I also think that in another year or two it can get much closer than it is now. Hopefully GPUs will also become more general-purpose than they are now, once the electronics shortage crisis is over.

        (And we're talking about rendering in Blender; these users will sacrifice anything (and anyone) to gain every second of rendering speed :P )
        Last edited by stargeizer; 24 November 2021, 04:46 PM.



        • #14
          Originally posted by tildearrow View Post

          You all forgot about Vulkan Compute.
          Yes, we did. I wonder what happened to it? Is it even supported by mainstream drivers?



          • #15
            Originally posted by cl333r View Post

            Yes, we did. I wonder what happened to it? Is it even supported by mainstream drivers?
            It should be. It works fine on Mesa.
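            If you want to sanity-check your own setup, a rough sketch along these lines (using the ash bindings; the error handling and printout are just illustrative) lists whether each Vulkan device exposes a compute-capable queue family:

            Code:
            // Rough check: does any Vulkan device on this machine expose a compute queue?
            use ash::vk;

            fn main() {
                // SAFETY: loads the system Vulkan loader (libvulkan).
                let entry = unsafe { ash::Entry::load().expect("no Vulkan loader found") };
                let instance = unsafe {
                    entry
                        .create_instance(&vk::InstanceCreateInfo::default(), None)
                        .expect("instance creation failed")
                };

                let devices = unsafe { instance.enumerate_physical_devices().unwrap() };
                for (i, dev) in devices.iter().enumerate() {
                    let queues =
                        unsafe { instance.get_physical_device_queue_family_properties(*dev) };
                    let has_compute = queues
                        .iter()
                        .any(|q| q.queue_flags.contains(vk::QueueFlags::COMPUTE));
                    println!("device {}: compute queue family present: {}", i, has_compute);
                }

                unsafe { instance.destroy_instance(None) };
            }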



            • #16
              Originally posted by stargeizer View Post
              True, but unfortunately Rust still brings some overhead
              False. I guess I can stop reading your reply right there lol. You clearly have NFI what you're talking about.



              • #17
                Oh yeah... I know what I'm talking about, and I should know better, since I use Rust as my money maker. Anyway, this is not about holy wars, so let's end it here. If anybody can't see the strengths and weaknesses of the tools they supposedly use (and how to work around their limitations, where applicable), then they don't really know their tools. Anybody can write fast code in any language, and anybody can write slow code in any language, but every language has its own strengths and weaknesses, and mastering that knowledge is what makes you a proficient coder, IMHO.

                Evangelizing is one thing I prefer to reserve for religions, not coding.

                (only a few months until retirement...)
                Last edited by stargeizer; 25 November 2021, 12:56 PM.



                • #18
                  Originally posted by amxfonseca View Post

                  For sure. Someone needs to sponsor and develop it. I don't think either Apple or Nvidia is going to do it, since there is no monetary incentive for them. And AMD seems to be investing in the HIP backend already, so it would also be wasteful for them, especially if they can't extract the required performance from it; the last thing you want is to develop a backend that runs better on your competitor's hardware.

                  So any company that sells a high-performance device that supports Vulkan can definitely support the development of a new backend.
                  With Intel attempting to get into discrete graphics again, perhaps they could sponsor a Vulkan Compute backend?



                  • #19
                    Originally posted by stargeizer View Post

                    True, but unfortunately Rust still brings some overhead, and results are still quite slow for realtime applications compared with CUDA and C++ (the price to pay for memory-safe operations, unfortunately). I also think that in another year or two it can get much closer than it is now.
                    Just curious, do you have any benchmarks comparing the two?
                    Rust as a language should, in theory, have no performance impact due to safety; its safety mechanisms are zero-cost at runtime and instead mostly cost longer compilation times.
                    In practice things are of course not as black and white, but that's usually down to design choices rather than the language itself (and such mistakes can be made in any language).
                    It would be interesting to see what the bottleneck(s) of wgpu actually are.
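                    For what it's worth, here is a tiny illustration of that design-choices point (not a benchmark, and nothing wgpu-specific): both functions are safe Rust, but only the indexed one even has bounds checks for the optimizer to worry about:

                    Code:
                    // Two safe ways to sum a slice; any difference comes from how the code is
                    // written (indexing vs. iterators), not from the safety model itself.
                    fn sum_indexed(values: &[f32]) -> f32 {
                        let mut total = 0.0;
                        for i in 0..values.len() {
                            total += values[i]; // bounds-check candidate (often optimized out)
                        }
                        total
                    }

                    fn sum_iter(values: &[f32]) -> f32 {
                        values.iter().sum() // no indexing, nothing left to bounds-check
                    }

                    fn main() {
                        let data: Vec<f32> = (0..1_000).map(|i| i as f32).collect();
                        assert_eq!(sum_indexed(&data), sum_iter(&data));
                        println!("sum = {}", sum_iter(&data));
                    }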

