Blender 3.0's Cycles X Rendering Performance Is Looking Great

  • johanb (Senior Member) replied:
    Originally posted by stargeizer:

    True, unfortunately Rust still brings some overhead, and results are still quite slow for realtime applications compared with CUDA and C++ (it's the price to pay for memory-safe operations, unfortunately), but I also think that in one or two more years it can be much closer than it is now.
    Just curious, do you have any benchmarks comparing the two?
    Rust as a language should in theory not have any performance impact due to safety, as its safety mechanisms are zero-cost at runtime and instead cost longer compilation times.
    In practice things are of course not as black and white, but that's usually due to design choices rather than the language itself (and such mistakes can be made in any language).
    Would be interesting to see what the bottleneck(s) of wgpu are for sure.
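
    To make that concrete, here's a hypothetical sketch (not a benchmark, just the usual illustration): the one safety mechanism that can cost anything at runtime is the bounds check on indexing, and idiomatic iterator code lets the compiler prove it away entirely.
    Code:
    // Hypothetical sketch: two ways to sum a slice in Rust.
    // The only safety mechanism with a potential runtime cost here is the
    // bounds check on indexing; iterators let the compiler prove accesses
    // are in range, so the check disappears entirely.

    fn sum_indexed(xs: &[f32]) -> f32 {
        let mut total = 0.0;
        for i in 0..xs.len() {
            total += xs[i]; // bounds check; usually elided by LLVM, but not guaranteed
        }
        total
    }

    fn sum_iter(xs: &[f32]) -> f32 {
        xs.iter().sum() // no bounds checks: in-range access is proven at compile time
    }

    fn main() {
        let data = vec![1.0_f32, 2.0, 3.0];
        assert_eq!(sum_indexed(&data), sum_iter(&data));
    }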

  • PlanetVaster (Junior Member) replied:
    Originally posted by amxfonseca:

    For sure. Someone needs to sponsor and develop it. I don't think either Apple or Nvidia is going to do it, since there is no monetary incentive for them. And AMD seems to be investing in the HIP backend already, so it would also be wasteful for them, especially if they can't extract the required performance from it; the last thing you want is to develop a backend that will run better on your competitor's hardware.

    So any company that sells a high-performance device that supports Vulkan can definitely support the development of a new backend.
    With Intel attempting to get into discrete graphics again, perhaps they might sponsor a Vulkan Compute backend?

  • stargeizer (Junior Member) replied:
    Oh, yeah... I know what I'm talking about, and I should know better, since I use Rust as my money maker. Anyway, this is not about holy wars, so let's end it here. If anybody can't see the strengths and weaknesses of the tools they supposedly use (and how to work around their limitations, when applicable), then they don't really know their tools. Anybody can write fast code in any language, and anybody can write slow code in any language, but every language has its own strengths and weaknesses, and mastering that knowledge is what makes you a proficient coder, IMHO.

    Evangelizing is one thing I prefer to reserve for religions, not coding.

    (only a few months to retire....)
    Last edited by stargeizer; 25 November 2021, 12:56 PM.

  • microcode (Senior Member) replied:
    Originally posted by stargeizer:
    True, unfortunately Rust still brings some overhead
    False. I guess I can stop reading your reply right there lol. You clearly have NFI what you're talking about.

  • tildearrow (Senior Member) replied:
    Originally posted by cl333r:

    Yes, we did. I wonder what happened to it. Is it even supported by mainstream drivers?
    It should be. It works fine on Mesa.

  • cl333r (Senior Member) replied:
    Originally posted by tildearrow:

    You all forgot about Vulkan Compute.
    Yes, we did. I wonder what happened to it. Is it even supported by mainstream drivers?

  • stargeizer (Junior Member) replied:
    Originally posted by microcode:

    I've been enjoying this experience with wgpu in Rust: you create a (relatively) clean struct in Rust, and you can use it in your WGSL shader modules (which can have any number of entry points); you can also then run this on top of Vulkan, D3D12, or Metal (and, if you limit your features, you can also run it on top of GLES 3.0 / WebGL 2).
    True, unfortunately Rust still brings some overhead, and results are still quite slow for realtime applications compared with CUDA and C++ (it's the price to pay for memory-safe operations, unfortunately), but I also think that in one or two more years it can be much closer than it is now. Also, hopefully GPUs will be more general-compute-oriented than they are now in the future, post chip-shortage crisis, I think.

    (And we're talking about rendering in Blender; these users will sacrifice anything (and anyone) to gain every second of rendering speed :P )
    Last edited by stargeizer; 24 November 2021, 04:46 PM.

  • microcode (Senior Member) replied:
    Originally posted by rmfx:
    Can Vulkan compute do the same as HIP/OpenCL/CUDA, or does it have strong limitations?
    Well, it's a different question in this case, since Apple sponsored a Metal backend; if it can be done with Metal, then it can almost certainly be done on Vulkan.

  • microcode (Senior Member) replied:
    Originally posted by stargeizer:
    Coders sure choose CUDA because when you make a struct FooBar{}; in CUDA, it works on both CPU-side and GPU-side, no pains, no complications, easy peasy.
    I've been enjoying this experience with wgpu in Rust: you create a (relatively) clean struct in Rust, and you can use it in your WGSL shader modules (which can have any number of entry points); you can also then run this on top of Vulkan, D3D12, or Metal (and, if you limit your features, you can also run it on top of GLES 3.0 / WebGL 2).
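
    Roughly like this, sketched from memory, so treat the details as approximate: the WGSL attribute syntax varies between wgpu versions, it assumes bytemuck (with its derive feature) for the byte-casting, and upload_params and the device handle are just illustrative.
    Code:
    // From-memory sketch of the wgpu struct-sharing flow; details (WGSL
    // attribute syntax in particular) vary between wgpu versions.
    use bytemuck::{Pod, Zeroable};
    use wgpu::util::DeviceExt;

    // One struct definition on the Rust side. #[repr(C)] pins the layout;
    // Pod/Zeroable let us cast it straight to bytes for the GPU buffer.
    #[repr(C)]
    #[derive(Clone, Copy, Pod, Zeroable)]
    struct Params {
        scale: f32,
        offset: f32,
        _pad: [f32; 2], // round up to 16 bytes for uniform-buffer layout rules
    }

    // The WGSL mirror of that struct. wgpu validates binding layouts when the
    // pipeline is created, so a mismatch fails loudly instead of corrupting data.
    const SHADER: &str = r#"
    struct Params {
        scale: f32,
        offset: f32,
        _pad: vec2<f32>,
    }
    @group(0) @binding(0) var<uniform> params: Params;
    @group(0) @binding(1) var<storage, read_write> data: array<f32>;

    @compute @workgroup_size(64)
    fn main(@builtin(global_invocation_id) id: vec3<u32>) {
        if (id.x < arrayLength(&data)) {
            data[id.x] = data[id.x] * params.scale + params.offset;
        }
    }
    "#;

    // Hypothetical helper: upload the struct as a uniform buffer. `device`
    // comes from the usual instance/adapter/request_device dance (omitted).
    fn upload_params(device: &wgpu::Device, params: &Params) -> wgpu::Buffer {
        device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
            label: Some("params"),
            contents: bytemuck::bytes_of(params),
            usage: wgpu::BufferUsages::UNIFORM,
        })
    }
    The nice part is that the layout lives in one #[repr(C)] struct and wgpu checks the WGSL mirror against it at pipeline creation, instead of silently misreading bytes the way a hand-rolled Vulkan binding can.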
    Last edited by microcode; 24 November 2021, 04:24 PM.

  • stargeizer (Junior Member) replied:
    Coders sure choose CUDA because when you make a struct FooBar{}; in CUDA, it works on both CPU-side and GPU-side, no pains, no complications, easy peasy.

    Vulkan / OpenCL / anything not CUDA don't have any data-structure sharing like that with the host code. It's a point of contention that makes anything more complicated than 3-dimensional arrays hard to share, requiring lots of workarounds, and you need to consider what the hard limits of the GPU are.

    Yeah, Vulkan / OpenCL have all sorts of pointer-sharing arrangements (Shared Virtual Memory), but they're difficult to use in practice, because they keep the concepts of "GPU" code and "CPU" code separate. CUDA doesn't have that weakness, so there are no limits on what you can do on a GPU: code can be battle-tested faster, maintenance is light-years easier, and people can go home earlier at night.

    That's why NVIDIA won the compute wars years ago, and that's why other manufacturers are years of work behind, just playing catch-up; given the actual state of Vulkan tooling, NVIDIA has nothing to worry about. And Apple doesn't want to play this tune again, that's for sure. (Honestly, I was hoping Intel oneAPI would be a game changer, but it's focused more on the datacenter than anything else.)
    Last edited by stargeizer; 24 November 2021, 01:39 PM.
