AMD Releases HIP RT 2.2 With Multi-Level Instancing


  • AMD Releases HIP RT 2.2 With Multi-Level Instancing

    Phoronix: AMD Releases HIP RT 2.2 With Multi-Level Instancing

    AMD's GPUOpen crew today released HIP RT 2.2 as the newest version of this ray-tracing library for HIP...


  • #2
    AMD engineers have also been working to get the PBRT-v4 ray-tracer running on AMD GPUs. With their PBRT-v4 fork, they have now managed to successfully get the popular Disney Moana Island Scene rendered using HIP RT with a Radeon PRO W7900 graphics card.
    Our team also worked on and managed to run PBRT-v4 on AMD GPUs. Specifically, it has another GPU backend, which was ported to HIP and HIPRT. Our fork of PBRT-v4 can be found here. Multi-level instancing is one of the important features that allows rendering on a GPU with limited VRAM. The image below is the Moana Island Scene rendered on the AMD Radeon™ PRO W7900 with 48GB VRAM, in-core. This scene is organized into three levels with 156 unique primitives and 31 billion instantiated primitives.
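
    As a rough illustration of why multi-level instancing is what makes this fit in 48GB of VRAM, here is a minimal, self-contained C++ sketch. It is not the HIP RT API; the struct names and every figure except the 31 billion instantiated primitives quoted above are illustrative assumptions. The point is that unique geometry is stored once, and each level of the hierarchy only stores lightweight instance records (a transform plus a reference):

        // Illustrative sketch only: these struct names are made up for explanation
        // and do not mirror the real HIP RT API; every number except the 31 billion
        // instantiated primitives quoted above is an assumption.
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        struct Transform   { float m[3][4]; };                              // ~48 bytes per instance record
        struct MeshRef     { std::uint32_t meshId;     Transform xform; };  // level 2: instances of unique meshes
        struct SubScene    { std::vector<MeshRef> meshes; };
        struct SubSceneRef { std::uint32_t subSceneId; Transform xform; };  // level 1: instances of whole sub-scenes
        struct TopScene    { std::vector<SubSceneRef> subScenes; };         // level 0: top of the three-level hierarchy

        int main() {
            const double instancedTris = 31e9;   // effective primitives after instancing (from the post)
            const double uniqueTris    = 100e6;  // primitives actually stored once (assumed figure)
            const double numInstances  = 1e6;    // instance records across all levels (assumed figure)
            const double bytesPerTri   = 36.0;   // 3 vertices x 3 floats, no sharing (illustrative)
            const double bytesPerInst  = sizeof(Transform) + sizeof(std::uint32_t);
            const double GiB           = 1024.0 * 1024.0 * 1024.0;

            std::printf("flattened scene: ~%.0f GiB\n", instancedTris * bytesPerTri / GiB);  // ~1000 GiB, impossible in-core
            std::printf("instanced scene: ~%.1f GiB\n",
                        (uniqueTris * bytesPerTri + numInstances * bytesPerInst) / GiB);     // a few GiB of geometry
            return 0;
        }

    In other words, the 31 billion primitives the ray tracer "sees" never exist in memory as flat triangles; only the unique geometry and the instance records do.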
    To put this into perspective, it is 2024 and AMD engineers have finally managed to render this scene on a GPU that costs about four grand, or $5,400 if you buy it from System76.

    With an Nvidia video card, you could do this 2.5 years ago:

    After our extended tour through where pbrt-v4 spends its time getting ready to render the Moana Island scene, we finally look at rendering, comparing perform...


    Oh, and about rendering performance? It’s 26.7 seconds on the GPU (an NVIDIA RTX A6000) versus 326.5 seconds on the CPU (a 32 core AMD 3970X). Work out the division and that’s 12.2x faster on the GPU. If you’d prefer a clean 2048 sample per pixel rendering, the GPU gets through that in 215.6 seconds, once again over 12x faster than the CPU doing the same.
    And the author points out that the GPU-rendered version is the higher-quality version, insofar as it is the more accurate render, which dispels another myth: that software rendering is of higher quality.

    The Nvidia card is about $4,200, the scene requires about 29GB of VRAM for a 1920x1080 render, and in all honesty you will not be seeing this done for a complete game anytime soon when you consider VRAM usage and render time.

    BTW AMD team, way to go. It only took you 2.5 years to accomplish what Nvidia users already could.



    • #3
      Originally posted by sophisticles View Post



      To put this into perspective, it is 2024 and AMD engineers have finally managed to render this scene on a GPU that costs about four grand, or $5,400 if you buy it from System76.

      With an Nvidia video card, you could do this 2.5 years ago:

      After our extended tour through where pbrt-v4 spends its time getting ready to render the Moana Island scene, we finally look at rendering, comparing perform...




      And the author points out that the GPU-rendered version is the higher-quality version, insofar as it is the more accurate render, which dispels another myth: that software rendering is of higher quality.

      The Nvidia card is about $4,200, the scene requires about 29GB of VRAM for a 1920x1080 render, and in all honesty you will not be seeing this done for a complete game anytime soon when you consider VRAM usage and render time.

      BTW AMD team, way to go. It only took you 2.5 years to accomplish what Nvidia users already could.
      Right. Is there any connection to programs like Blender? You still can't use HIP-RT with that program in Linux? So, exactly what is this announcement for? You pointed out the facts: Nvidia could already do this rendering. AMD fanboys here will just say Nvidia had a head start and pays these software companies off to optimize with OptiX or whatever tech Nvidia uses. I have heard all that before.

      It still doesn't change the fact that AMD's progress in these fields is akin to a sloth climbing a tree. Or a snail's pace. Pick your expression.



      • #4
        Originally posted by sophisticles View Post
        And the author points out that the GPU-rendered version is the higher-quality version, insofar as it is the more accurate render, which dispels another myth: that software rendering is of higher quality.
        To be fair, GPU rendering *IS* software rendering, simply using a different processing unit. Unless it has dedicated ASIC hardware like encoders and decoders, if it runs on the standard CUDA cores then it's generic software rendering, just "GPU accelerated". Also, the main difference in speed and quality is due to the entirely different way that the GPU render path handles textures and related curves. The speed increase is probably at least partially related to the fact that the GPU straight up skips many textures. As for quality... that's debatable. It does better on some shadows in the first image rendered, but the sheer lack of proper texture support makes parts of it look awful and not accurate at all.

        TL;DR, the GPU-rendered image will probably take longer (but still be faster) once its texture issues are fixed, and the CPU-rendered image will probably look better once the CPU rendering code is fixed.

        Originally posted by sophisticles View Post
        BTW AMD team, way to go. It only took you 2.5 years to accomplish what Nvidia users already could.
        AMD is generally behind in a lot of ways, but this is hardly a "could vs couldn't" scenario. The guy who made the render test not only wrote it specifically for NVidia GPUs, but also had to write custom rendering codepaths to do it. If anybody with an AMD GPU cared enough, they probably could have ported it over to AMD GPUs fairly quickly, but nobody cares about a random test some guy with a blog made in his free time.



        • #5
          Originally posted by Panix View Post
          Right. Is there any connection to programs like Blender? You still can't use HIP-RT with that program in Linux? So, exactly what is this announcement for? You pointed out the facts: Nvidia could already do this rendering. AMD fanboys here will just say Nvidia had a head start and pays these software companies off to optimize with OptiX or whatever tech Nvidia uses. I have heard all that before.

          It still doesn't change the fact that AMD's progress in these fields is akin to a sloth climbing a tree. Or a snail's pace. Pick your expression.
          AMD does not want to make any progress; it's not in their best interest.

          AMD's approach to CPUs is more cores; they will be releasing a 192C/384T TRm. AMD does not want to sell you a $4,000 video card that's 12x faster than a comparable CPU/motherboard combo when they can keep selling faster CPUs with more cores every year.

          Nvidia is kind of forcing AMD to go through the paces of pretending they care about GPU acceleration.



          • #6
            Originally posted by Daktyl198 View Post
            To be fair, GPU rendering *IS* software rendering, simply using a different processing unit. Unless it has dedicated ASIC hardware like encoders and decoders, if it runs on the standard CUDA cores then it's generic software rendering, just "GPU accelerated". Also, the main difference in speed and quality is due to the entirely different way that the GPU render path handles textures and related curves. The speed increase is probably at least partially related to the fact that the GPU straight up skips many textures. As for quality... that's debatable. It does better on some shadows in the first image rendered, but the sheer lack of proper texture support makes parts of it look awful and not accurate at all.

            TL;DR, the GPU-rendered image will probably take longer (but still be faster) once its texture issues are fixed, and the CPU-rendered image will probably look better once the CPU rendering code is fixed.
            You are right about GPU rendering being software rendering, just on a different chip; in fact I have argued this very point a number of times here, but since most people think hardware when they hear GPU, I figured why split hairs.

            With regard to quality, this is a debate I have had with numerous people over the years regarding video encoding.

            Do you define quality as which looks better or do you define it as which is closer to the original?

            Originally posted by Daktyl198 View Post
            AMD is generally behind in a lot of ways, but this is hardly a "could vs couldn't" scenario. The guy who made the render test not only wrote it specifically for NVidia GPUs, but also had to write custom rendering codepaths to do it. If anybody with an AMD GPU cared enough, they probably could have ported it over to AMD GPUs fairly quickly, but nobody cares about a random test some guy with a blog made in his free time.
            The rendering test is an industry standard test:

            This data set contains everything necessary to render a version of the Motunui island featured in the 2016 film Moana. The scene is chosen to represent some of the challenges we currently encounter in a typical production environment. Most notably it includes large amounts of geometry created through instancing as well as complex volumetric light transport.

            There are many other challenges which are frequently encountered in production rendering and which are not represented in this scene (examples include motion blur and a large number of light sources, to name just two). Still, we hope that this will be a useful dataset for developing, testing and benchmarking new rendering algorithms.
            There is a lot of interest in speeding up the rendering of this scene, because Disney makes a lot of money from its animated films but also spends a lot of money producing them.

            If they can speed up the rendering time, it would significantly reduce expenditures.

            Years ago, in one of their episodes, Family Guy animated a pink elephant for 10 seconds; they claimed it cost them 50 thousand dollars to do it, and they did it to waste Fox's money, lol.
            Last edited by sophisticles; 30 January 2024, 11:28 PM.



            • #7
              Originally posted by sophisticles View Post
              Do you define quality as which looks better or do you define it as which is closer to the original?
              Closer to the original. That shouldn't really be a debate. Subjective editing is entirely allowed, and shouldn't be judged. But objective quality, especially when it comes to encoding/decoding, should be based on how close it is to the raw data.

              But again, the GPU rendering wins in texture shadows, but loses in displaying some textures at all. So it's kind of a wash for now.
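
              To make "closer to the original" measurable rather than a matter of taste, a common approach is a per-pixel error metric against a reference image. Here is a minimal C++ sketch of that idea (my own illustration, not code from pbrt-v4 or HIP RT; the flat RGB buffer layout is an assumption), computing MSE and PSNR between a candidate render and a high-sample-count reference:

                  // Objective "closeness to the original": mean squared error and PSNR between
                  // a candidate render and a reference render. Flat float RGB buffers in [0,1]
                  // are an assumption for illustration; this is not code from pbrt-v4 or HIP RT.
                  #include <cmath>
                  #include <cstddef>
                  #include <cstdio>
                  #include <vector>

                  double mse(const std::vector<float>& ref, const std::vector<float>& test) {
                      double sum = 0.0;
                      for (std::size_t i = 0; i < ref.size(); ++i) {
                          const double d = double(ref[i]) - double(test[i]);
                          sum += d * d;
                      }
                      return sum / double(ref.size());
                  }

                  double psnr(double mseValue, double peak = 1.0) {
                      return 10.0 * std::log10(peak * peak / mseValue);  // higher = closer to the reference
                  }

                  int main() {
                      // Toy 2x2 RGB images; in practice these would be the CPU and GPU renders,
                      // each compared against a converged, very-high-sample-count reference.
                      std::vector<float> reference(12, 0.5f);
                      std::vector<float> candidate(12, 0.5f);
                      candidate[9] = 0.6f;  // one channel off by 0.1
                      const double e = mse(reference, candidate);
                      std::printf("MSE = %.6f  PSNR = %.2f dB\n", e, psnr(e));
                      return 0;
                  }

              By that definition, the more accurate render wins regardless of which one subjectively "looks better".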


              Originally posted by sophisticles View Post
              The rendering test is an industry standard test:

              This data set contains everything necessary to render a version of the Motunui island featured in the 2016 film Moana.

              There is a lot of interest in speeding up the rendering of this scene, because Disney makes a lot of money from its animated films but also spends a lot of money producing them.

              If they can speed up the rendering time, it would significantly reduce expenditures.
              I wouldn't consider that an "industry test" so much as a test put out by somebody in the industry. Splitting hairs, though, as you said. It's definitely something worth investing time and effort into, but not really for AMD, as most rendering software refuses to even attempt to support their GPGPU capabilities and instead targets CUDA specifically. That's why AMD wrote their CUDA-to-OpenCL compiler, but nobody even bothers to use that in their software.

              AMD just isn't popular in the rendering space. So even though somebody could have taken that GPU rendering implementation of Moana Island and converted it to work on AMD hardware, it likely just wasn't something anybody cared to do. As mentioned, it's basically just GPGPU software rendering on the GPU, so it should be possible. It'd likely be slower than the NVidia counterpart, but that's kinda standard. I guess my point is just that it's not like AMD couldn't do it for 2.5 years; it's just that nobody cared to try because there's no point even if they do, lol.



              • #8
                Originally posted by Panix View Post
                Right. Is there any connection to programs like Blender? You still can't use HIP-RT with that program in Linux?
                HIP-RT support in Blender is there in the Linux code, but it doesn't always work. According to the latest Render & Cycles meeting notes, fixing HIP-RT support on Linux is waiting on HIP-RT being open-sourced.

                Which I think implies that it will be open sourced soon, a nice thing in itself.



                • #9
                  Originally posted by sophisticles View Post
                  There is a lot of interest in speeding up the rendering of this scene, because Disney makes a lot of money from its animated films but also spends a lot of money producing them.
                  Not in recent years; pretty much everything Disney Animation and Pixar released in the last couple of years bombed. Illumination and Fortiche are still making bank; they're basically the last beacons of Western animation at this point (they're not really known for pushing budgets or technology, though). And Asia barely uses CGI. It's more about the VFX industry these days.



                  • #10
                    Originally posted by sophisticles View Post
                    To put this into perspective, it is 2024 and AMD engineers have finally managed to render this scene on a GPU that costs about four grand, or $5,400 if you buy it from System76.
                    With an Nvidia video card, you could do this 2.5 years ago:
                    After our extended tour through where pbrt-v4 spends its time getting ready to render the Moana Island scene, we finally look at rendering, comparing perform...

                    And the author points out that the GPU-rendered version is the higher-quality version, insofar as it is the more accurate render, which dispels another myth: that software rendering is of higher quality.
                    The Nvidia card is about $4,200, the scene requires about 29GB of VRAM for a 1920x1080 render, and in all honesty you will not be seeing this done for a complete game anytime soon when you consider VRAM usage and render time.
                    BTW AMD team, way to go. It only took you 2.5 years to accomplish what Nvidia users already could.
                    I did buy an AMD PRO W7900; it cost 4,000€, and the cheapest offer I have seen was 3,960€.
                    If the rendering needs 29GB, then the W7900 fits well; it has 48GB of VRAM.

                    It is true that AMD lags behind, and it is also true that it took 2.5 years, but the positive point is: the 2.5 years have already passed and the future looks good.

