Blender Planning Vulkan Support This Year, Other Exciting Improvements


  • #11
    Originally posted by piotrj3 View Post
    To replace CUDA? In terms of compute it probably won't matter, or may even be worse.
    Why so?

    • #12
      Originally posted by piotrj3 View Post
      To replace CUDA? In terms of compute it probably won't matter, or may even be worse.
      There are multiple examples of Vulkan compute outperforming CUDA versions of apps. It could go either way, and as I said before, it might even be scene-specific.

      • #13
        Originally posted by tildearrow View Post

        Why so?
        Performance-wise, Vulkan is only slightly faster than OpenCL in most workloads, and on average CUDA was already significantly faster. That said, things may recently have improved in Vulkan's favour, but CUDA gets improvements all the time (over the last 2 generations, the compute power of Nvidia GPUs in CUDA-like workloads grew much faster than gaming performance). My knowledge there could be significantly outdated, however.

        I would rather say it comes down to which backend ends up better written.

        Also, I am not sure whether the Vulkan ray tracing extensions can truly be used to render the highest-quality images; as far as I know, Radeon Rays 4.0 uses them but had significant image artifacts. However, a simple hybrid ray tracing approach for a fast render or the viewport could be amazing.
        Last edited by piotrj3; 20 April 2021, 04:55 PM.

        • #14
          Originally posted by piotrj3 View Post

          Performance-wise, Vulkan is only slightly faster than OpenCL in most workloads, and on average CUDA was already significantly faster. That said, things may recently have improved in Vulkan's favour, but CUDA gets improvements all the time (over the last 2 generations, the compute power of Nvidia GPUs in CUDA-like workloads grew much faster than gaming performance).

          I would rather say it comes down to which backend ends up better written.
          I have tried waifu2x-converter (OpenCL) against waifu2x-ncnn-vulkan (Vulkan) and the NCNN version was two times faster than the Converter one.

          Please don't kill my hope on Vulkan... I am tired of this CUDA monopoly.
          So many nice compute apps I cannot run only because I own an AMD card.

          • #15
            Originally posted by tildearrow View Post

            I have tried waifu2x-converter (OpenCL) against waifu2x-ncnn-vulkan (Vulkan) and the NCNN version was two times faster than the Converter one.

            Please don't kill my hope on Vulkan... I am tired of this CUDA monopoly.
            So many nice compute apps I cannot run only because I own an AMD card.
            Waifu2x on Vulkan performs about 1.5x faster than CUDA on a GTX 1050 Ti. CUDA isn't inherently faster or slower than Vulkan compute; it depends on the workload. CUDA is much more widely used and understood by devs, though, so most will keep using CUDA unless they have a good reason not to.

            • #16
              Originally posted by tildearrow View Post
              Wait, so this has nothing to do with rendering?
              Come on, I thought Blender was working on a Vulkan Cycles renderer...
              EEVEE is a renderer, and from what I've gathered, the Vulkan API being lower level than OpenGL opens up a lot of possibilities beyond performance, allowing it to get results closer to those of a ray tracing renderer like Cycles than is currently possible.
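For readers wondering what "closer to a ray tracing renderer" means here: the core query a path tracer like Cycles answers for every ray is "what does this ray hit first?", which a rasterizer like EEVEE sidesteps by projecting geometry to the screen and approximating the rest. A minimal sketch of that per-ray visibility test (plain Python for illustration only, not Blender or Vulkan code):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.

    origin/direction/center are 3-tuples; direction should be normalized.
    This is the visibility query a path tracer answers millions of times
    per frame, and that a rasterizer avoids by projecting triangles instead.
    """
    # Vector from sphere center to ray origin.
    oc = tuple(o - c for o, c in zip(origin, center))
    # Quadratic coefficients for |origin + t*direction - center|^2 = radius^2.
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearer of the two roots
    return t if t > 0.0 else None

# A ray fired straight down -z at a unit sphere centered 5 units away hits at t = 4.
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # → 4.0
```

Hybrid approaches like those discussed above run a limited number of such ray queries (via the Vulkan ray tracing extensions) on top of a rasterized base image, rather than tracing every light path.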

              • #17
                Originally posted by Quackdoc View Post

                Waifu2x on Vulkan performs about 1.5x faster than CUDA on a GTX 1050 Ti. CUDA isn't inherently faster or slower than Vulkan compute; it depends on the workload. CUDA is much more widely used and understood by devs, though, so most will keep using CUDA unless they have a good reason not to.
                If only there were a CUDA-on-AMD runtime... (no, the AMD-provided tools to convert CUDA to HSA aren't enough)

                • #18
                  Originally posted by tildearrow View Post

                  Wait, so this has nothing to do with rendering?
                  Come on, I thought Blender was working on a Vulkan Cycles renderer...
                  As far as I can see their only Vulkan plan related to rendering is to replace OpenGL with Vulkan in their EEVEE engine, but I barely use that one...

                  Anyway, I'm not worrying about Vulkan Ray Tracing in Cycles for now. Why should I care about the software when I can't buy the hardware?

                  • #19
                    Originally posted by tildearrow View Post

                    I have tried waifu2x-converter (OpenCL) against waifu2x-ncnn-vulkan (Vulkan) and the NCNN version was two times faster than the Converter one.

                    Please don't kill my hope on Vulkan... I am tired of this CUDA monopoly.
                    So many nice compute apps I cannot run only because I own an AMD card.
                    AMD's ROCm is a really confusing move to me.
                    That platform requires tons of extra work but delivers worse usability than their existing OpenCL runtime.
                    If they focused on Vulkan Compute as HIP's backend, the situation would be much better; at least we wouldn't need to wait for years before we can compute anything on RDNA.

                    • #20
                      Originally posted by Grinch View Post
                      EEVEE is a renderer, and from what I've gathered, the Vulkan API being lower level than OpenGL opens up a lot of possibilities beyond performance, allowing it to get results closer to those of a ray tracing renderer like Cycles than is currently possible.
                      Closer, as you said.

                      Not accurate though...
