Blender 4.1 Released With Faster Linux CPU Rendering & AMD RDNA3 APU Support


  • #21
    Originally posted by tenchrio View Post

If you read over Nvidia's support forum - user questions and comments regarding the Omniverse RTX Renderer - there are a lot of issues and people having problems with it. Practically all of those threads go nowhere and die, suggesting no solution or fix was found.

No seriously, it happens there too; some questions have had 0 replies since April 2023 or even earlier.
But all of that is irrelevant, we went over this before: if you look for negativity you will find it, and AMD, Intel or Nvidia doesn't matter. Someone will experience crashes for a reason that might not be a bug. Looking at the Radeon ProRender support forum I already see plenty of questionable topics, like out-of-date Blender versions (so no LTS or any Blender version post-2023, despite being asked last week) or a lack of information like the driver version or even the exact Windows version (not just "10", I mean something like 10 21H2).

And the same goes for Nvidia's RTX Renderer forum; some questions are also just nonsensical, like "Using RTX Renderer as a library"..... what?! It's a render engine, the question doesn't make sense, so no wonder nobody, not even the Nvidia staff, has touched it since March 28, 2023.


I think I know which benchmarks you are talking about, and that was only true for HIP-RT; normal HIP worked fine in every benchmark, and that was also on Blender 3.6, not the previously released 4.0, for which I still haven't found a proper deep dive (RIP Techgage).

It should be noted that Blender 4.0 saw a performance regression which, in theoretical numbers, hit Nvidia the hardest; only AMD's RX 7900 XTX saw an uplift in performance.
Actual render times are hard to find (and I explained before why the Open Data benchmark and its scoring system are extremely questionable even when looking solely at Nvidia cards; to this date the RTX 4080 somehow still beats the 4080 Super, and the RTX 3090 still beats its Ti version by hundreds of points according to this benchmark).


Intel's OpenGL performance is still very lacking, and Blender once again delayed Eevee-Next, which would bring Vulkan.
This OpenGL problem affects both Eevee and the viewport; in Wireframe mode Intel's A770 had about 1/7th the performance of the RX 7600 in Blender 3.6.
It is pretty obvious you are just looking to shit on AMD (again), since I have told you this before, as well as how you can work around it by using a different render engine like Hydra Storm (which comes from an AMD-made extension and uses Vulkan) or by downloading the daily/dev build of Blender, which ships with Eevee-Next in its current form. For any Intel users reading along: Hydra Storm does have its downsides if you plan to do the final render in Cycles, since not all Blender material nodes, specifically the procedural ones, are supported, so they won't show up in Material Preview mode unless you bake them first with Cycles.
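For reference, switching the active scene over to one of those engines is a one-liner from Blender's Python console; a minimal sketch assuming a 4.x build where Hydra Storm registers under the 'HYDRA_STORM' identifier (check the registered bpy.types.RenderEngine subclasses if your build reports a different name):

```python
import bpy

scene = bpy.context.scene
# Move the scene off Cycles onto the Vulkan-based Hydra Storm delegate.
# 'HYDRA_STORM' is an assumption about the engine identifier in this build.
scene.render.engine = 'HYDRA_STORM'
print("Active render engine:", scene.render.engine)
```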

    If somehow AMD using other Render engines isn't allowed why does Intel get a free pass on an otherwise very serious performance issue?


I'm sorry, what does "the only *CHANCE* for AMD to make any showing in Blender" even mean?
Like, it shows up in the Blender settings menu if you enable it under HIP, if that is what you are implying.
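For anyone following along, this is roughly what enabling it looks like from the scripting side; a minimal sketch against the Cycles add-on preferences, assuming a card with a working HIP/ROCm driver:

```python
import bpy

# Point Cycles at HIP, tick every detected device, and make the scene render on the GPU.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "HIP"   # "CUDA"/"OPTIX"/"ONEAPI" on other vendors
prefs.get_devices()                 # refresh the detected device list
for device in prefs.devices:
    device.use = True
bpy.context.scene.cycles.device = "GPU"
```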

We have been over this as well: Blender isn't just the Cycles render engine. If you use it as a modeling or sculpting tool, AMD cards would be ideal, since they offer far better viewport performance for a lower price, and those use cases aren't niche; Blender has good interoperability with Unity, Unreal Engine and Godot, to the point that those can import straight from .blend files. Multiple courses exist detailing workflows from Blender to one of those game engines.
Also, if you are going to bring up the Puget Systems Unreal Engine benchmark, you had better bring up the rasterization performance and not just the RT performance (which is off by default) or the combined score (which exists only for reference, not an actual use case). It had also better be the recent one with the RTX Super.
    LOL!
    The difference is, Nvidia has some support for these programs and it is adapted for the hardware - makes use of the ray tracing/tensor cores - it's not in 'development' or 'experimental' status for forever.....Zzzzzzzzzzzzzzzzz....

    Tons of info/videos/tutorials out there for Nvidia's Renderer.
    In this tutorial we are going to create a Cinematic Environment using the USD Composer in Nvidia Omniverse. NVIDIA Omniverse™ USD Composer is an Omniverse ap...

In this video, we're going to show you the best render settings for NVIDIA Omniverse. By using the right render settings, you'll be able to produce high-quali...


    I already said that Intel has only had a rudimentary start/intro to Rendering - and their gpus seem to be catching up faster than AMD's.

    At least, if we are talking about discrete/dedicated hardware.

Yes, we talked about it before and you use the same excuses. Again, the majority of those in the industry or supporting users - i.e. those who sell hardware and consult - promote Nvidia gpus. Casual users tend to pick Nvidia gpus. Only fanboys and shills push AMD gpus. I would like AMD to catch up or do something in this sphere - that's why I'm awaiting the benchmarks here. I suspect AMD won't be up to the task - the few benchmarks or tests with AMD gpus do tend to indicate that trend continuing. The RTX 4080 Super is still outperforming anything from AMD.

The other intangible here is that even when HIP on the AMD gpu actually *works* (i.e. the render starts, doesn't crash, the program runs), there are still reports of renders not completing or other issues - perhaps that has been fixed, but it's the last thing I heard from ppl discussing current-gen cards and Blender (HIP).




    • #22
      Originally posted by mirmirmir View Post
I love AMD but, while Nvidia, Intel and Apple provide denoising support, AMD provides... basic support on their consumer hardware??
      Post sponsored by….



      • #23
        Originally posted by Panix View Post
        LOL!
        The difference is, Nvidia has some support for these programs and it is adapted for the hardware - makes use of the ray tracing/tensor cores - it's not in 'development' or 'experimental' status for forever.....Zzzzzzzzzzzzzzzzz....
Radeon ProRender is also a finished product and so is HIP, basically; the only thing that label applies to is HIP-RT. Your fanboy side is showing again, lol.

        Originally posted by Panix View Post
        Tons of info/videos/tutorials out there for Nvidia's Renderer.
.....what does this have to do with anything? You argued AMD ProRender was bad because they didn't respond to every help article on their forums; my point was that Nvidia (and other major corporations) do the same.

        Originally posted by Panix View Post
        I already said that Intel has only had a rudimentary start/intro to Rendering - and their gpus seem to be catching up faster than AMD's.

        At least, if we are talking about discrete/dedicated hardware.
The problem is that the OpenGL performance has remained much the same since Intel's launch.
Intel isn't giving much of an indication that they are looking to fix it. For DX9 they used DXVK (so they translate DirectX to Vulkan); they could do the same for OpenGL with Zink, but in both cases these are open-source translation layers that have to fix Intel's performance for them (and perhaps a reminder that both Intel and AMD worked and still work with OpenCL, which Blender cut before ROCm was ready).



        Originally posted by Panix View Post

        Yes, we talked about it before and you use the same excuses.
Sure, "excuses" - not me offering real use cases where AMD's performance is beneficial.

        Originally posted by Panix View Post
Again, the majority of those in the industry or supporting users - i.e. those who sell hardware and consult - promote Nvidia gpus. Casual users tend to pick Nvidia gpus. Only fanboys and shills push AMD gpus.
Love this leap in logic. I give you actual performance numbers where AMD comes out on top and the situations this would apply to. But somehow the people promoting Nvidia GPUs are automatically truthful (and couldn't possibly have some form of paid deal with Nvidia). If someone came to me and asked "what GPU should I buy?", I don't just say Nvidia or AMD like a shill; I ask them what they want to use it for, and I can admit that AMD actually has use cases where it comes out on top (not all of them, and it is possible Nvidia would suit theirs better).

        Originally posted by Panix View Post
I would like AMD to catch up or do something in this sphere - that's why I'm awaiting the benchmarks here. I suspect AMD won't be up to the task - the few benchmarks or tests with AMD gpus do tend to indicate that trend continuing. The RTX 4080 Super is still outperforming anything from AMD.
In rasterized (so default) performance for Unreal Engine the RX 7900 XTX comes out above it, so not everywhere.

The 7900 XTX has dropped as low as $900 now, while the RTX 4080 Super had a launch MSRP of $999. Price-to-performance is important: the 7900 XTX being cheaper means that it being outperformed makes sense, but by how much should be justified by the price difference; if your use case gets twice the performance on the RTX 4080S, the RTX 4080S would be the better choice even at the higher price.
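To make that trade-off concrete, here is a tiny performance-per-dollar comparison; the prices are the ones mentioned above, while the throughput numbers are placeholders you would swap for your own benchmark results:

```python
# Placeholder throughput values purely for illustration; substitute measured numbers
# from whatever workload you actually care about (Cycles samples/min, viewport FPS, ...).
cards = {
    "RX 7900 XTX":    {"price_usd": 900, "throughput": 100.0},
    "RTX 4080 Super": {"price_usd": 999, "throughput": 130.0},
}

for name, card in cards.items():
    value = card["throughput"] / card["price_usd"]
    print(f"{name}: {value:.4f} performance units per dollar")
```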

        Originally posted by Panix View Post
        The other intangible here - is that when the AMD gpu HIP actually *works* (i.e. the render starts - doesn't crash - the program runs) - there's still reports of renders not completing or issues - perhaps, that has been fixed but it's the last thing I heard with ppl discussing current gen cards and Blender (HIP).
Again, I cannot stress this enough: you are looking for these problems, and they could very well be related to user error. I addressed last time how OptiX recently had an actual bug with OSL, an actual bug that they had to triage and that affected all users who use OSL, which apparently included me, as I found out the hard way and had confirmed by the Blender devs. Unlike you I don't go "Nvidia absolutely unstable for Blender!!"; it's a bug, it happens, and most users wouldn't even notice since they probably won't enable OSL (which is off by default).
There are some actual bugs with HIP; for example, one was recently confirmed where, if you use an AMD APU together with an AMD GPU and switch between CPU and GPU mode, it can mess up the GPU render (specifically textures). Similar to the OptiX OSL one, not everyone is affected, and a workaround exists here: disable the internal graphics of the APU/CPU.
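If you'd rather not disable the iGPU in the BIOS, roughly the same effect can be had by unticking it in Cycles' device list; a minimal sketch where the "Graphics" name check is only a guess at how the APU reports itself, so adjust it to whatever Preferences > System actually shows:

```python
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "HIP"
prefs.get_devices()  # refresh the detected device list

for device in prefs.devices:
    # Keep the discrete HIP device(s); untick anything that looks like the APU's iGPU.
    # The name filter is an assumption - check device.name on your own system.
    device.use = device.type == "HIP" and "Graphics" not in device.name
```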

But that isn't what you are saying; you are saying HIP flat out doesn't work, which is just disingenuous. It is pretty obvious from this that Intel CPU + AMD GPU users aren't affected, nor is anyone with a non-AMD APU, and you say all this while referencing finished benchmarks, which wouldn't make sense if HIP couldn't complete the render.
Or are you once again conflating HIP with HIP-RT? And where are you hearing these conversations? If it is a dedicated chat/thread/forum for discussing issues with the software, then again it isn't shocking that you hear these things; that's what they are for! Go to the Nvidia/OptiX or Intel/oneAPI equivalent and you will find the same thing, and that doesn't rule out that the problem lies with the user who installed it rather than the software itself.



        • #24
          Originally posted by tenchrio View Post
Radeon ProRender is also a finished product and so is HIP, basically; the only thing that label applies to is HIP-RT. Your fanboy side is showing again, lol.


.....what does this have to do with anything? You argued AMD ProRender was bad because they didn't respond to every help article on their forums; my point was that Nvidia (and other major corporations) do the same.


The problem is that the OpenGL performance has remained much the same since Intel's launch.
Intel isn't giving much of an indication that they are looking to fix it. For DX9 they used DXVK (so they translate DirectX to Vulkan); they could do the same for OpenGL with Zink, but in both cases these are open-source translation layers that have to fix Intel's performance for them (and perhaps a reminder that both Intel and AMD worked and still work with OpenCL, which Blender cut before ROCm was ready).




Sure, "excuses" - not me offering real use cases where AMD's performance is beneficial.


Love this leap in logic. I give you actual performance numbers where AMD comes out on top and the situations this would apply to. But somehow the people promoting Nvidia GPUs are automatically truthful (and couldn't possibly have some form of paid deal with Nvidia). If someone came to me and asked "what GPU should I buy?", I don't just say Nvidia or AMD like a shill; I ask them what they want to use it for, and I can admit that AMD actually has use cases where it comes out on top (not all of them, and it is possible Nvidia would suit theirs better).


In rasterized (so default) performance for Unreal Engine the RX 7900 XTX comes out above it, so not everywhere.

The 7900 XTX has dropped as low as $900 now, while the RTX 4080 Super had a launch MSRP of $999. Price-to-performance is important: the 7900 XTX being cheaper means that it being outperformed makes sense, but by how much should be justified by the price difference; if your use case gets twice the performance on the RTX 4080S, the RTX 4080S would be the better choice even at the higher price.


Again, I cannot stress this enough: you are looking for these problems, and they could very well be related to user error. I addressed last time how OptiX recently had an actual bug with OSL, an actual bug that they had to triage and that affected all users who use OSL, which apparently included me, as I found out the hard way and had confirmed by the Blender devs. Unlike you I don't go "Nvidia absolutely unstable for Blender!!"; it's a bug, it happens, and most users wouldn't even notice since they probably won't enable OSL (which is off by default).
There are some actual bugs with HIP; for example, one was recently confirmed where, if you use an AMD APU together with an AMD GPU and switch between CPU and GPU mode, it can mess up the GPU render (specifically textures). Similar to the OptiX OSL one, not everyone is affected, and a workaround exists here: disable the internal graphics of the APU/CPU.

But that isn't what you are saying; you are saying HIP flat out doesn't work, which is just disingenuous. It is pretty obvious from this that Intel CPU + AMD GPU users aren't affected, nor is anyone with a non-AMD APU, and you say all this while referencing finished benchmarks, which wouldn't make sense if HIP couldn't complete the render.
Or are you once again conflating HIP with HIP-RT? And where are you hearing these conversations? If it is a dedicated chat/thread/forum for discussing issues with the software, then again it isn't shocking that you hear these things; that's what they are for! Go to the Nvidia/OptiX or Intel/oneAPI equivalent and you will find the same thing, and that doesn't rule out that the problem lies with the user who installed it rather than the software itself.
          More BS from you plus the same rhetoric you used last time. I think I'll say wait until Michael tests it - both HIP and HIP-RT since there's no excuse - they should run - either the benchmarks can be completed or not. Then, go by that for starters.

But, I will say - I posted a thread here - it was a long LONG thread of ppl WHO OWN AMD GPUS - having problems using them in Blender - and this is just one thread - I encountered a few - can't remember if I posted them all here - there were 3 noteworthy ones. They're fairly up to date. E.g.

Hello everyone! Is there any optimistic news regarding the improvement of HIP rendering stability in the new Blender? I didn't find any information in the summary of the upcoming version 4.1. Please, please tell me that you know something? @bsavery Is it a good time to buy cards with RDNA3 if I want to stay with Linux? This thread has been going on since 15 November 2021; it can be suspected that the problem appeared earlier. I assume that if I actually worked on it, the problem would be rather so...


You argue that Nvidia users have problems too with Blender - CUDA/OptiX, renderers, what have you - but it doesn't appear to be to the same degree - or ppl wouldn't be recommending Nvidia cards to such an extent. Sure, if you are just diddling with Blender - casual work - go ahead and get an AMD gpu - since you want it for other reasons - gaming, FOSS - if it's primarily for Linux use. But if you're serious - get an Nvidia card - the ppl saying this are not Nvidia fans or anything like that - they don't like the company but argue they're forced to pick those cards anyway. That's what I am afraid of, too. That's why I really want to be WRONG here - if the AMD gpus/software - HIP/HIP-RT, ROCm - have decent performance relative to Nvidia CUDA + OptiX - I will only be too glad to witness it! A 7900 XTX in the Linux ecosystem - FOSS etc. - with decent performance in Blender/DR/video editing etc. would be ideal, but I suspect that won't happen. Also, I don't like that power efficiency isn't great compared to a 4080, for example. So, I'm just waiting to see what performance tests show and whether Wayland/explicit sync changes things. Although, I am quite antsy to pick something (soon). ;-)



          • #25
            Originally posted by Panix View Post
            More BS from you plus the same rhetoric you used last time. I think I'll say wait until Michael tests it - both HIP and HIP-RT since there's no excuse - they should run - either the benchmarks can be completed or not. Then, go by that for starters.
With HIP I see little reason they won't (most benchmarks are honestly lacking for proper testing; it's been ages since I have seen a proper benchmark that tests the limits like Victor, with a heavy VRAM requirement and lots of compute-heavy details like hair and subsurface scattering).
HIP-RT is harder to say; the delay in Blender for Linux could be related to the fact that it was only open-sourced earlier this month.

            Originally posted by Panix View Post
But, I will say - I posted a thread here - it was a long LONG thread of ppl WHO OWN AMD GPUS - having problems using them in Blender - and this is just one thread - I encountered a few - can't remember if I posted them all here - there were 3 noteworthy ones. They're fairly up to date. E.g.

            https://devtalk.blender.org/t/cycles...back/21400/589
Okay, but did you actually read them? Aside from Kazim's comment, which has little to do with it (and seems to be a botched ROCm install, so user error), the crashes mentioned recently happen specifically when you have 2 viewports in 1 workspace and both are set to Rendered mode (using Cycles).

I tested with a friend (RX 7800 XT), and if either viewport is done rendering no crash occurs (or if the second, third and so on viewport is not set to Rendered mode; even Material Preview worked fine). This isn't something you would encounter by default; even the Animation workspace sets both viewports to Solid. It's not uncircumventable either (heck, my friend wasn't aware until an hour ago): just make sure you only have a single rendered viewport going, which for the sake of performance you kind of want anyway (not a single tutorial would ever instruct you to set up 2 rendered viewports). As for Eevee, this bug did not occur there; using an animated scene we opened 3 rendered viewports and Blender worked fine (FPS did drop from 24 to 7, but that would happen on any GPU).

I get that this looks worrying, but I can't remember a single time I used 2 rendered viewports; I sometimes have up to 3 viewports for sculpting, but all 3 will be set to Solid mode.
But if you feel like this would be your type of workflow (I would assume purely out of spite), I guess AMD is not an option.
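If you want to check whether your own layout would even hit this, here is a minimal sketch that counts rendered 3D viewports and demotes the extras to Material Preview; it assumes the standard bpy API and does change shading settings, so run it on a file you don't mind touching:

```python
import bpy

# Collect every 3D viewport currently in Rendered shading mode across all open windows.
rendered = [
    area.spaces.active
    for window in bpy.context.window_manager.windows
    for area in window.screen.areas
    if area.type == "VIEW_3D" and area.spaces.active.shading.type == "RENDERED"
]
print(f"{len(rendered)} viewport(s) in Rendered mode")

# Keep at most one rendered viewport; the rest fall back to Material Preview,
# which is enough to stay clear of the multi-viewport crash described above.
for space in rendered[1:]:
    space.shading.type = "MATERIAL"
```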
            Originally posted by Panix View Post
You argue that Nvidia users have problems too with Blender - CUDA/OptiX, renderers, what have you - but it doesn't appear to be to the same degree - or ppl wouldn't be recommending Nvidia cards to such an extent. Sure, if you are just diddling with Blender - casual work - go ahead and get an AMD gpu - since you want it for other reasons - gaming, FOSS - if it's primarily for Linux use. But if you're serious - get an Nvidia card - the ppl saying this are not Nvidia fans or anything like that - they don't like the company but argue they're forced to pick those cards anyway. That's what I am afraid of, too. That's why I really want to be WRONG here - if the AMD gpus/software - HIP/HIP-RT, ROCm - have decent performance relative to Nvidia CUDA + OptiX - I will only be too glad to witness it! A 7900 XTX in the Linux ecosystem - FOSS etc. - with decent performance in Blender/DR/video editing etc. would be ideal, but I suspect that won't happen. Also, I don't like that power efficiency isn't great compared to a 4080, for example. So, I'm just waiting to see what performance tests show and whether Wayland/explicit sync changes things. Although, I am quite antsy to pick something (soon). ;-)
I get that, but you aren't exactly listening to me when I say Blender is more than just the CUDA/HIP performance in Cycles.
Nor are tech journalists exactly known for proper benchmarking (the years of Ashes of the Singularity benchmarks are proof of that, or now the Blender Open Data benchmark with extremely questionable results even with Nvidia, as I detailed earlier, or, god forbid, the times they run Blender BMW on GPU and call it a day). I have explained before how some use cases within Blender never see Rendered mode, and viewport performance is more important there. But you are having an extremely hard time admitting that those benchmarks or use cases exist. At least be specific that you are referring to Blender Cycles performance and not Blender performance overall; it is a disservice to the versatility of Blender that it gets boiled down to just the CUDA/HIP performance of its Cycles render engine.

Considering you want to do video editing, you probably want to use Blender for animation and not still renders (single frame), correct? In which case you are more likely to use Eevee.
            This isn't something that is "BS", this is something that is often repeated by animators using Blender:
            This is a question that has been asked since the inception of the Eevee render engine back when Blender version […]

            This article compares both Blender render engines Eevee vs Cycles. We will show several examples to illustrate the differences between both engines.

            Choosing between Eevee and Cycles really boils down to what you are trying to achieve with your project. Understand the basics of these render engines, compare their pros and cons, and pick the best option for rendering in Blender with this article.
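As a small illustration of how little it takes to point an animation job at Eevee, a minimal sketch assuming Blender 4.1, where the engine identifier is 'BLENDER_EEVEE' (Eevee-Next builds may expose a different ID); the frame range and output path are just examples:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = "BLENDER_EEVEE"       # Eevee in 4.1; Eevee-Next builds may differ
scene.frame_start = 1
scene.frame_end = 240                       # example frame range
scene.render.filepath = "//renders/shot_"   # example output path, relative to the .blend

# Render the whole frame range with the real-time engine.
bpy.ops.render.render(animation=True)
```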


The only downside is the realism factor, which will be less of an issue with Eevee-Next while it keeps the incredible speed of Eevee (Eevee-Next was supposed to come with Blender 4.1 but has been delayed to 4.2, and it looks very promising):
            Read on Eevee Next as it elevates real-time workflows in Blender with significant improvements, from ray tracing, to vector displacement and improved shaders.

            Dynamic VFX Pack (Free Sample Pack): https://blendermarket.com/products/blender-dynamic-vfx---elemental-asset-packCrafty Asset Pack (Free Sample Pack): https...


But even still, people have made impressive things in Eevee; someone even converted the Classroom benchmark from Cycles to Eevee and managed a render time of 8 seconds back in 2019 (a render time that would only be rivaled in Cycles 4 years later, and only by the RTX 4090 using OptiX):
            I will describe my general process in converting Cycles scenes into Eevee. Here are my main tips that I use to get started: For proper setting for shadows see link on Light Leaks. Lighting workflow see HDR lighting. Multiple or nested IRVs see link on Nested IRVs. Do a Cycles render pay close attention to the lighting specially the shadows. Shadows can be sometimes difficult to create accurately in Eevee but are critical for realistic rendering. Lastly material differences like glas...


Other render engines exist as well (like AMD ProRender) that offer more realism than Eevee but still faster render times than Cycles (it is why Unreal Engine is also a popular tool in VFX, which, again, Blender has great interoperability with; in such a workflow you wouldn't even switch to Cycles in Blender, just Eevee for the modeling, texturing, etc.).

And if you want FOSS, there is just no way other than AMD. I find it weird you bring that up but still consider Nvidia. If the worry is that AMD can't do the things you mentioned (Blender/DR/video editing; btw, Blender also has a built-in video editor), it can, but it will depend heavily on your use case whether it will outperform the equivalent Nvidia card (sometimes it does, sometimes it doesn't). They do tend to work, though, and I know plenty of people with AMD cards who use Blender and never once mentioned crashes (save for their own fault, like fat-fingering the subdivision value), so as long as you don't do anything out of the ordinary you will be fine.

If you still want the 4080, I have said before that the 16GB can be an issue (if only they had upped the Super to 20GB). As always it depends on your use case (as with AMD, so with Nvidia): if you don't render at 4K with similarly high-quality textures or above, you will be fine, but certain render speed-up tricks like baking or Persistent Data might be out of the question depending on the scene. Again, it really depends on what you make and how you make it, and VRAM is a lot harder to compensate for than time; running low on VRAM can hurt render times badly. I gave you the video before where an RTX 3070 lost to the 3060 12GB by quite the margin due to the lower VRAM, and I personally hit plenty of "Out of CUDA memory" errors in my 1080 Ti days, so trust me when I say it is a pain to fix a VRAM shortage, especially when you thought you were fine. And I would still advise you to look towards the RTX 4090 if you can. Then again, you might not have to; figure out your use case in Blender first, since it heavily affects what your requirements are. Low-VRAM scenarios with high render times exist, but I don't know what you want to make; you never even say it, you just go on and on about Cycles performance and call it Blender performance, while I keep having to point out that Blender is more than just Cycles rendering and sometimes Cycles isn't even necessary. If you truly gave a shit, you would give me an example of what you actually want to make in Blender.
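Since Persistent Data and baking keep coming up as VRAM-hungry speed-ups, here is a minimal sketch of the relevant Cycles scene settings; treat the values as examples to tune per scene rather than recommendations:

```python
import bpy

scene = bpy.context.scene

# Keep scene data resident between frames: a big speed-up for animations,
# at the cost of holding on to VRAM for the whole render.
scene.render.use_persistent_data = True

# If VRAM gets tight, Simplify can clamp texture sizes at render time
# (a quality trade-off; 4096 here is just an example value).
scene.render.use_simplify = True
scene.cycles.texture_limit_render = "4096"
```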

