AMD HIP vs. NVIDIA CUDA vs. NVIDIA OptiX On Blender 3.2


  • #21
    Originally posted by reba View Post
    and is otherwise disabled and stuffed away somewhere where it does not interfere when not.
    Aren't external Displays routed over the dGPU and won't work if you turn it off?

    Comment


    • #22
      Originally posted by pierce View Post
      It is genuinely sad not to see the vendor neutral OpenCL there.
      Perhaps, but Nvidia killed OpenCL with CUDA and vendor lock-in... and OpenCL failed to progress beyond 3.0; it's basically a dead, very buggy API.

      In this case the best strategy is to embrace, extend, and open source.

      Comment


      • #23
        Originally posted by Anux View Post
        Aren't external Displays routed over the dGPU and won't work if you turn it off?
        I think it depends on how the hardware is designed.
        For this machine it's true: both USB-C DisplayPorts are wired to the Nvidia card, so if it's turned off, there is no chance of display output (AFAIK, and as I tried out).

        When I want to use an external display, I have to switch the dGPU on AND use X11, as Nvidia's Wayland support is not very good (or the other way around).
        I seldom use the external display, as it comes with roughly +20 W of power usage and a fallback to X11.
        The X11 configuration has to be tweaked (which card is primary, which monitors/screens connect to which card), and xrandr is needed for the correct display layout (which goes right, which goes left, which is primary). But it's entirely possible to use an external display (at a cost).
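        For the layout step, a minimal xrandr sketch looks like this. The output names `eDP-1` (internal panel) and `DP-1-0` (external monitor on the NVIDIA card) are assumptions; the real names vary per machine and driver, so check `xrandr -q` first.

        ```shell
        # Keep the laptop panel as primary and place the external
        # monitor (on the dGPU) to its right. Output names are
        # hypothetical; substitute what `xrandr -q` reports.
        xrandr --output eDP-1 --primary --auto \
               --output DP-1-0 --auto --right-of eDP-1
        ```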

        So my impression is: an all-AMD system is still leaps and bounds better for overall desktop usage, and the Nvidia card is truly just a compute/gfx co-processor, with quirks when used outside of this restricted scope.
        Last edited by reba; 15 June 2022, 06:45 AM.

        Comment


        • #24
          About what I expected. Seems like a good first step for AMD; hopefully we will see some work put into optimization now.
          On the NVIDIA side it's interesting that the 3060 on OptiX outcompetes the 3090 on CUDA.

          The 6400 looks like it could really benefit from having a cooler that takes up more than a single slot. Curious as to what the power usage of that card was.

          Comment


          • #25
            Originally posted by GraysonPeddie View Post

            I would also want to go with NVIDIA GPU for OBS as well. There isn't a hardware encoder made for AMD GPUs the last time I did a video recording.
            HW encoding works on AMD GPUs via OBS's FFmpeg VA-API encoder.
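            A rough standalone equivalent of that encoder path, for testing outside OBS, is FFmpeg's `h264_vaapi` encoder. The render-node path `/dev/dri/renderD128` and the input file are assumptions; adjust per system.

            ```shell
            # Hardware H.264 encode on an AMD GPU via VA-API (Mesa).
            # Frames are converted to NV12 and uploaded to the GPU
            # before the hardware encoder sees them.
            ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv \
                   -vf 'format=nv12,hwupload' -c:v h264_vaapi -qp 23 \
                   output.mp4
            ```

            If this works, OBS's "FFMPEG VAAPI" output should work on the same card.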

            Comment


            • #26
              Originally posted by elatllat View Post

              I thought 4.4x was much worse;

              Phoronix: Blender 3.2 Performance With AMD Radeon HIP vs. NVIDIA GeForce On Linux
              Isn't that based on OptiX? The OP here specifically said CUDA.

              Comment


              • #27
                This is the first benchmark I've seen with the Navi 22, Navi 23 and Navi 24 cards on HIP. Those architectures are not officially supported in the ROCm math libraries, so I don't have easy access to that hardware. It's really great to see a thorough benchmark of the HIP compute stack on consumer GPUs. AMD has a lot of work left to do in order to match NVIDIA on this front, but these results are an encouraging start. Thanks, Michael!

                Comment


                • #28
                  Originally posted by aufkrawall View Post
                  Looks like the CUDA option has become mostly superfluous
                  Wrong, it is not superfluous, because only CPU and CUDA rendering produce mathematically identical output.

                  With OptiX you get an output, but it is not the same output.

                  If the output is not the same, it is not a good benchmark.
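                  To compare the two outputs directly, the same scene can be rendered headless with each backend; `scene.blend` is a placeholder file, and everything after `--` is passed through to Cycles:

                  ```shell
                  # Render frame 1 once per backend; the only variable
                  # is the Cycles compute device, so the images can be
                  # diffed for the CUDA-vs-OptiX output differences.
                  blender -b scene.blend -o //cuda_#  -f 1 -- --cycles-device CUDA
                  blender -b scene.blend -o //optix_# -f 1 -- --cycles-device OPTIX
                  ```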
                  Phantom circuit Sequence Reducer Dyslexia

                  Comment


                  • #29
                    Originally posted by Mathias View Post
                    But is there even a reason to not use Optix? Does Optix support everything Cycles/Cuda does? I know AMD Prorender doesn't support everything.
                    The output of OptiX is not the same as that of the CPU or CUDA renderer.
                    Phantom circuit Sequence Reducer Dyslexia

                    Comment
