AMD Quietly Funded A Drop-In CUDA Implementation Built On ROCm: It's Now Open-Source


  • Originally posted by duby229 View Post

    So I do have to admit that I don't even have a modern Nvidia card to experiment and play with right now... So I'll just give that much up: I don't have any way of actually testing OptiX out for myself on my own hardware.

    It's not a money issue, it's a desire issue for me.

    And yes, the general consensus seems to be that OptiX produces much worse quality output. If quality isn't a concern but render times are, then OptiX does seem to have some value in that regard.
    Can you present any sites - discussions/chats or reviewers - that explicitly express that view/perspective? It's the first I've heard of it.

    I think it's quite possible that AMD is a good option if you can keep using ZLUDA - although who knows what will happen in the future, or whether that project will continue.

    It looks promising if all one needs is CUDA or a good alternative. But from what I've read, OptiX improves render times significantly, and I didn't get the impression that quality is ACTUALLY WORSE than if one used HIP-RT (assuming it at least works 'unofficially' on Windows - even if AMD claims it is 'a release' for Windows, it is not acknowledged by the OpenBlender website - which does accept samples using ZLUDA).
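    For what it's worth, the 'drop-in' part means the application just picks up ZLUDA's replacement CUDA libraries instead of NVIDIA's. A minimal sketch of launching Blender that way on Linux - the ~/zluda path and the Blender location here are placeholder assumptions, and ZLUDA's own README is the authority on the exact setup:

    ```python
    import os
    import subprocess

    # Placeholder paths -- point these at wherever ZLUDA and Blender actually live.
    ZLUDA_DIR = os.path.expanduser("~/zluda")   # directory with ZLUDA's replacement CUDA libraries
    BLENDER_BIN = "/usr/bin/blender"

    env = os.environ.copy()
    # Prepend ZLUDA so its libraries shadow the real CUDA driver for this process only.
    env["LD_LIBRARY_PATH"] = ZLUDA_DIR + ":" + env.get("LD_LIBRARY_PATH", "")

    # Inside this Blender instance, the CUDA device type should then be backed by ROCm via ZLUDA.
    subprocess.run([BLENDER_BIN], env=env, check=True)
    ```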

    We review the Nvidia RTX 4070 Ti SUPER in various CG-related Workloads, such as Rendering in Octane, Redshift, V-Ray, or Content Creation Apps such as After Effects and Premiere Pro.


    "That said, we should have a more exciting contest once we see HIP-RT implementations in the Blender’s stable version (currently experimental). As it stands, Radeon sadly isn’t a good idea for any rendering workload because ray acceleration is an experimental feature, and artists have complained of crashes and ‘artifacting’ when enabling it."

    Nvidia's RTX 4080 SUPER GPU Refresh is put through content creation and rendering benchmarks in our review to find out if its performance is worth investing your money into.


    Attendees Brecht Van Lommel (Blender) Thomas Dinges (Blender) Nikita Sirgienko (Intel) Xavier Hallade (Intel) Stefan Werner (Intel) Attila Áfra (Intel) Patrick Mours (NVIDIA) Brian Savery (AMD) Christophe Hery (Meta) Notes OpenImageDenoise: Stefan implemented a solution for CUDA primary context retaining New release with CUDA driver and Metal support is expected next week, just in time for 4.1. Using a git tag or hash is fine for us, which means it might be available sooner as we do not have...

    "HIP-RT support on Linux is not going to be ready for 4.1. It is waiting for the HIP-RT library to be open sourced."

    In the realm of 3D rendering, two titans stand tall: AMD and NVIDIA. Both companies have carved their niche in the industry, offering a diverse range of


    NVIDIA has released the SUPER variants of their RTX 4080, 4070 Ti, and 4070 consumer GPUs. How do they compare to their non-SUPER counterparts?


    Learn how to easily denoise rendered images using the standalone command line version of NVIDIA Optix Denoiser. Say goodbye to grainy images and hello to a cleaner and more professional look. No rendering required!


    I haven't been able to find articles or discussions of 'bad quality' of optix denoising in Blender.
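    For anyone who wants to compare the denoisers on their own scenes, the switch is scriptable from Blender's Python console; a minimal sketch using the Cycles properties exposed in recent Blender releases:

    ```python
    import bpy

    scene = bpy.context.scene
    scene.render.engine = "CYCLES"

    # Enable denoising for final renders and pick the denoiser:
    # 'OPTIX' needs an NVIDIA RTX card, 'OPENIMAGEDENOISE' runs on any hardware.
    scene.cycles.use_denoising = True
    scene.cycles.denoiser = "OPTIX"          # swap to "OPENIMAGEDENOISE" to compare output
    scene.cycles.preview_denoiser = "AUTO"   # viewport denoiser; 'AUTO' lets Cycles choose

    # Render to the scene's output path (same as pressing F12 and saving the still).
    bpy.ops.render.render(write_still=True)
    ```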



    • Originally posted by Panix View Post
      LMAO. Yes, that's why Puget Systems who actually configures systems for Blender (being one program among others) only includes Nvidia configurations. You're a funny guy, Q - almost as amusing as your troll buddy, tenchrio.
      I discovered that Puget Systems is a website run by evil deep state operatives the moment their web server was used to attack my computer and install a trojan horse.

      For me, Nvidia is a deep state run company, and if a computer store only sells configurations with Nvidia, then of course it is a deep state operation to scam people.

      "You're a funny guy, Q"

      Your laughter will get stuck in your throat and you will choke on your own spit.
      Phantom circuit Sequence Reducer Dyslexia



      • Originally posted by Panix View Post
        Liar. I posted several links and cited various websites on here. Of course, you post none here. Also, you keep going on tangents with stuff I already conceded - like Nvidia's lack of vram on many gpus - I mentioned this and chastised Nvidia for their bs. I am not an Nvidia fan - I know their business policies and practices are bad. I am looking mostly at used gpus and made it clear that I am mostly discussing the tech and comparing. I also made it clear that it's preferable to get as much vram as possible - I sold a 3080 10gb gpu since I wanted more vram, just in case. I don't upgrade often, so I am taking my time to decide. I also wanted to adhere to open source if possible, so I do want AMD to have good performance - they just don't in the areas I want to use the gpu for. I think gaming is a toss-up - a coin flip - and I don't care about the upscaling/ray tracing in games, or whether dlss is better than fsr - some nvidia and amd ppl who mostly concentrate on gaming will argue there.

        The debate topic I am concerned with here is rendering/gpgpu compute/video editing - maybe AI/ML - but you couldn't cite anything with Optix except one issue. It's severe - okay, noted. But HIP/HIP-RT has a gazillion. LOL!

        That Optix issue doesn't stop ppl from preferring/picking Nvidia gpus over AMD's for Blender. You must have done different research in another universe.
        LMAO here you go again. No, unlike you, I know my use case. I know it would now benefit from AMD over Nvidia (at least from a value perspective; again, get a 4090 if you are so worried). I tend to gravitate towards NPR with lots of particles and simulations (take a mesh, make it explode with the Cell Fracture add-on and a force emitter, fun stuff). When I do use Cycles I screw around with render passes and low sample rates to get intentionally unrealistic results - there are some incredibly stunning results you can get with it (check out paul o caggegi inkspot, that is what inspired me) - but some already took 2 seconds to render on CUDA back on my GTX 1080 Ti, so I doubt it would benefit from Optix as much as from the performance boosts GPUs already got naturally.

        Sometimes I make game assets or vr-avatars for friends and don't need to render at all (only the material viewport to check if my UVs are fine, material setup happens in Unity anyway). I own a 3D Printer (Artillery Sidewinder, nice and big) and my best friend is into tabletop, I never texture what he wants me to make so all that would matter is viewport performance. AMD would have been fine here, I even did it at a LAN party once and experienced 0 issues (I believe they had a 6900XT, I only brought my laptop and it was for some quick models for a game they were making).

        The difference is that I already possess a vast amount of knowledge and even experience on the subject. You are literally looking for negatives to convince yourself you shouldn't go AMD, finding them, and then pretending as if that means AMD is just not an option at all. Problems exist with Optix too; it is possible you may not experience them (OSL isn't used by everyone, for example), or perhaps you will. Or maybe what you create is closer to the Whitelands render in the Techgage benchmarks and Optix will improve the speed by only about 5% over CUDA (something you keep ignoring as well - Scanlands sees the greatest increase, I might check tomorrow after work why that is, but in the other 2 benchmarks Optix's impact is considerably less, especially for the high-end cards, and as stated before the output is a tad different; examples exist where this is even noticeable by the human eye). Obviously I am not going to name every issue with Optix, but it is clear you are looking for HIP issues on purpose; of course you would find many, "lol".

        Did you ever open Blender? Did you ever render?
        Do you even know if you would render with Cycles or Eevee? Did you ever even touch that setting to begin with (because it defaults to Eevee)?
        Hell, if truly all you care about is speed, then Eevee is king: what takes a minute in Cycles takes a second in Eevee (but the output will look quite different). Eevee also doesn't have separate AMD and Nvidia implementations - it uses OpenGL for both, and one day Eevee-Next will use Vulkan (man, kinda weird how this one is used by all 3 GPU makers but you don't blame AMD, or the Blender Foundation, for it not being finished or optimized, despite the fact it would bring RT acceleration to Eevee and has been late for 2 Blender versions now - or could it be that developing this stuff is hard and takes time, and no amount of whining will speed it up).
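        To actually put numbers on that for a given scene, the engine switch is a single property and can be timed from a script - a rough sketch, using the engine identifiers as they exist up through Blender 4.1 (where Eevee is still 'BLENDER_EEVEE'):

        ```python
        import time
        import bpy

        def time_render(engine: str) -> float:
            """Render the current scene once with the given engine and return seconds taken."""
            bpy.context.scene.render.engine = engine
            start = time.perf_counter()
            bpy.ops.render.render(write_still=False)
            return time.perf_counter() - start

        # Same scene, two engines -- the output will differ, which is exactly the point above.
        for engine in ("BLENDER_EEVEE", "CYCLES"):
            print(engine, f"{time_render(engine):.2f}s")
        ```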

        What are your VRAM requirements going to be? Are you going to use 4K textures or procedural materials? Will you be looking to texture paint? Will you bake normals, color, AO etc. to improve render times (but up VRAM)? What is the output size? All these things matter. An RTX 4080 only has 16GB and, for a god forbidden reason, a 256-bit bus, meaning its theoretical bandwidth is 716.8 GB/s - lower than the much cheaper RX 7900 XT with its 20GB, 320-bit bus and bandwidth of 800 GB/s - which in theory can negatively affect the render time under a shared-memory workload (where data has to swap from system RAM), even more so for the RTX 4080, which would have to rely on this sooner. I linked to it before: the difference in render time doubled for the 3070 over the 3060 12GB (which, note, is a weaker card by default, meaning you lost more than double the render time from the 3070's perspective). So there goes even the optimistic performance advantage of Scanlands (which renders on an RX 6500 XT with 4GB and still outspeeds an Arc A380 without RT, so chances are nothing there is baked and textures are low-res and reused), and you are more likely to OOM and not be able to render at all.
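        The bandwidth figures above are just bus width times effective memory speed; a quick back-of-the-envelope check (22.4 Gbps GDDR6X on the 4080 and 20 Gbps GDDR6 on the 7900 XT are the published per-pin rates):

        ```python
        def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
            """Theoretical memory bandwidth in GB/s: (bus width in bytes) * data rate per pin."""
            return bus_width_bits / 8 * gbps_per_pin

        print(bandwidth_gbs(256, 22.4))  # RTX 4080:   256-bit @ 22.4 Gbps -> 716.8 GB/s
        print(bandwidth_gbs(320, 20.0))  # RX 7900 XT: 320-bit @ 20.0 Gbps -> 800.0 GB/s
        ```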

        People tend to go with Nvidia over AMD overall, and the RTX 30 line is a prime example of how that wasn't necessarily the right choice for some users. Most people bought it for video games, and now there are constant examples of why buying an RX 6800 over an RTX 3070 would have been better for that use case, also relating to the VRAM.
        Some people use Windows for AI, despite the fact that WDDM 2 still only allows the allocation of 81% of VRAM to CUDA applications, as it has been doing since its introduction with Windows 10 (the most popular desktop OS, by the way), and it was the reason I went full-on Linux (yes, Blender was affected). Popular doesn't mean better.
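        Whatever the cause on a given setup, you can check how much VRAM a CUDA application can actually see and allocate from PyTorch; a small sketch (assumes a CUDA-enabled PyTorch build):

        ```python
        import torch

        if torch.cuda.is_available():
            free_bytes, total_bytes = torch.cuda.mem_get_info()  # (free, total) as reported by the driver
            gib = 1024 ** 3
            print(f"free:  {free_bytes / gib:.1f} GiB")
            print(f"total: {total_bytes / gib:.1f} GiB")
            print(f"currently allocatable: {free_bytes / total_bytes:.0%} of the card")
        else:
            print("No CUDA device visible to this PyTorch build.")
        ```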

        If you want to combine AI with Blender - with, for instance, the AI Render / Stable Diffusion plugin - I can guarantee you that the second you load in LoRAs and textures, your VRAM will go quickly. I once tested the combination of an SDXL checkpoint with about 2 LoRAs in a relatively simple scene on a 4090 and managed to make the system unusable, as the PC only had 32GB of normal RAM (when I tested a second time, I started killing Stable Diffusion the instant I saw the swap filling up like crazy).
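        If you try a combination like that yourself, it's worth watching system RAM and swap while the checkpoint and LoRAs load instead of finding out when the desktop locks up; a minimal monitor using psutil, run from a separate terminal (the 50% threshold is an arbitrary example):

        ```python
        import time
        import psutil

        while True:
            ram = psutil.virtual_memory()
            swap = psutil.swap_memory()
            print(f"RAM {ram.percent:5.1f}%   swap {swap.percent:5.1f}%")
            # Arbitrary example threshold: warn before swap thrashing makes the machine unusable,
            # so the offending process can still be killed by hand.
            if swap.percent > 50.0:
                print("Swap is filling up -- consider killing the Stable Diffusion process now.")
            time.sleep(2)
        ```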

        Again, go for a 4090 - best thing, just expensive. A 3090 is great if you can get a deal (do watch out for cryptominers). Some complain about the installation of ROCm being complex; I laugh, as I read the install instructions and wonder how people fumble copy-paste. There are some cases (Eevee and viewport) where the RX 7900 XTX outspeeds even the 4090. But you're just gonna have to live with it: whichever choice you make, the consequences only matter if/when they apply. Maybe by the time HIP-RT matures you never used Optix and were experimenting with AI, Eevee, or perhaps even LuxCore. Maybe the viewport performance never takes a noticeable drop.
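        On the ROCm side, a quick sanity check after following the install instructions is to confirm that a ROCm build of PyTorch actually sees the GPU (ROCm builds still expose it through the torch.cuda API, with torch.version.hip set):

        ```python
        import torch

        print("device visible:", torch.cuda.is_available())
        print("HIP runtime:", torch.version.hip)   # None on CUDA builds, a version string on ROCm builds
        if torch.cuda.is_available():
            print("device name:", torch.cuda.get_device_name(0))
        ```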

        Maybe you never hit 16GB of VRAM and the RTX 4080 would have been just fine, even faster than the 3090. Maybe everything you do can be done on a 4060 Ti and the performance difference wouldn't have even mattered. There are people who, for instance, enjoy making low-poly renders in orthographic perspective and who, I swear, make nothing else; the integrated graphics on a laptop would suffice for them. The problem is you don't know your actual use case: you spot 1 negative, make outlandish claims based on 0 experience, and want every one of us to accept it as gospel. The 7900 XTX has its set of upsides in Blender; it just depends on the user (which you admit to not being, yet somehow pretend to base your opinion on).



        • Originally posted by Panix View Post
          LMAO. Yes, that's why Puget Systems who actually configures systems for Blender (being one program among others) only includes Nvidia configurations. You're a funny guy, Q - almost as amusing as your troll buddy, tenchrio.
          Yet they also lack RTX 4080 Super and 4070 Ti Super cards, almost as if they have stock they first want to get rid of.
          Also a quote from the Pugetsystems benchmark of the 7900XTX at the end:

          Overall, there are a few workflows where the AMD Radeon RX 7900 XTX is very strong, but there are even more areas where we either encountered what are most likely driver or application bugs, or where NVIDIA is simply the better choice. Because of this, we still give NVIDIA the “best GPUs for content creation” crown, but if your workflow lines up with AMD’s strengths, the Radeon 7900 XTX can give you better performance than NVIDIA – and at a lower price.
          And crazy - isn't that what I have been saying? Buy a 4090, else weigh your options. Before you mention their Blender benchmark, I'd like to point out it pales in comparison to Techgage's deep dive (which is more recent, though we still lack the most recent Blender version) and it still depends on what you wish to do. Also, crazy lines from the Techgage article: "Not every project will magically show a massive rendering speed gain from RT acceleration," and "With Eevee, NVIDIA still proves the fastest overall, but depending on the project, AMD can perform really well, too." Man, crazy, all these quotes.
          Calling you a troll at this point is too much of an honor; it would imply you actually have the intelligence to pull off malice.



          • Originally posted by tenchrio View Post

            Yet they also lack RTX 4080 Super and 4070 Ti Super cards, almost as if they have stock they first want to get rid of.
            Also a quote from the Pugetsystems benchmark of the 7900XTX at the end:



            And crazy - isn't that what I have been saying? Buy a 4090, else weigh your options. Before you mention their Blender benchmark, I'd like to point out it pales in comparison to Techgage's deep dive (which is more recent, though we still lack the most recent Blender version) and it still depends on what you wish to do. Also, crazy lines from the Techgage article: "Not every project will magically show a massive rendering speed gain from RT acceleration," and "With Eevee, NVIDIA still proves the fastest overall, but depending on the project, AMD can perform really well, too." Man, crazy, all these quotes.
            Calling you a troll at this point is too much of an honor; it would imply you actually have the intelligence to pull off malice.
            That's just them being polite. You're an annoying troll and not worth replying to anymore.

            Edit: Puget configures a Blender system with all Nvidia desktop cards including an option for a crappy 4060 - no AMD gpu option at all. Out of workstation cards, 9 Nvidia and only 2 AMD so I don't even know what point you had - you don't make sense, EXPERT. lol! Expert troll, for sure.
            Last edited by Panix; 19 February 2024, 02:31 PM.



            • Originally posted by Panix View Post
              That's just them being polite. You're an annoying troll and not worth replying to anymore.

              Edit: Puget configures a Blender system with all Nvidia desktop cards including an option for a crappy 4060 - no AMD gpu option at all. Out of workstation cards, 9 Nvidia and only 2 AMD so I don't even know what point you had - you don't make sense, EXPERT. lol! Expert troll, for sure.
              Like how delusional are you?
              If they have 2 AMD workstation cards and don't offer newer Nvidia cards, then anyone with room temperature IQ would understand that they are working with a backlog, most likely ordered wholesale. Basic economics - why else wouldn't they update their offerings with the RTX 4070 Ti Super despite praising it in their review of it?
              Also, how about a larger sample size? Digitalstorm also makes workstations and, guess what, the R7900XTX is an option there. Same for Origin PC, same for Ava.
              But of course that all doesn't matter: you're an Nvidia fanboy and need to latch onto any excuse that AMD just cannot ever be an option (and my god do you keep digging a hole and lowering the bar - the Chinese have been asking who is entering their ground space).

              It is clear you don't have an interest in Blender or AI, as you already shifted the goalpost from actual performance to the offerings of a single workstation provider (which even recommends the GPU in a review, but even here you pull excuses straight out of your ass with no evidence, only your delusions). You're finally right on one thing: no point in replying. I have said everything already and have been repeating it ever since; hell, what I say is even in the articles I reference and that you now pretend to reference as if you did any actual research. Buy whatever GPU, but I have higher hopes of HIP-RT seeing stability than of you even starting Blender up, let alone pressing F12.

              Edit: I also never called myself an expert but I guess compared to you, people who completed the infamous Donut tutorial could even be called that.
              Last edited by tenchrio; 19 February 2024, 04:34 PM.



              • Originally posted by tenchrio View Post

                Like how delusional are you?
                If they have 2 AMD workstation cards and don't offer newer Nvidia cards, then anyone with room temperature IQ would understand that they are working with a backlog, most likely ordered wholesale. Basic economics - why else wouldn't they update their offerings with the RTX 4070 Ti Super despite praising it in their review of it?
                Also, how about a larger sample size? Digitalstorm also makes workstations and, guess what, the R7900XTX is an option there. Same for Origin PC, same for Ava.
                But of course that all doesn't matter: you're an Nvidia fanboy and need to latch onto any excuse that AMD just cannot ever be an option (and my god do you keep digging a hole and lowering the bar - the Chinese have been asking who is entering their ground space).

                It is clear you don't have an interest in Blender or AI, as you already shifted the goalpost from actual performance to the offerings of a single workstation provider (which even recommends the GPU in a review, but even here you pull excuses straight out of your ass with no evidence, only your delusions). You're finally right on one thing: no point in replying. I have said everything already and have been repeating it ever since; hell, what I say is even in the articles I reference and that you now pretend to reference as if you did any actual research. Buy whatever GPU, but I have higher hopes of HIP-RT seeing stability than of you even starting Blender up, let alone pressing F12.

                Edit: I also never called myself an expert but I guess compared to you, people who completed the infamous Donut tutorial could even be called that.
                More insults again. Great.

                I am pretty sure I've done quite a bit of research, thanks. But let's look at your statements and arguments first, okay? I cited Puget since their main business is productivity software - not just Blender.

                You chose to point out sites that are either gaming and/or 'everything' - which is fine, but I chose Puget for a specific reason. Also, when one goes to your examples, you quickly notice/recognize that they ALSO promote/sell an overwhelmingly Nvidia-centric gpu configuration of their desktops, including workstations. So I really don't know why you were saying I was wrong or something. I am only going by what I READ.

                To summarize your other argument(s): 1) "why else wouldn't they update their offerings with the RTX 4070 Ti Super despite praising it in their review of it?" - I think that's redundant - they have switched, as most of their offerings were Geforce 30/Ampere before, and I think the 4070 Ti Supers will be offered in time. I don't know how that has any impact on them offering Nvidia - if AMD made more strides in Blender, then I suppose they'd get some 7900 xtx cards. The final builds are in the thousands of dollars - I think it's not difficult for them to come up with new cards.

                2) "But of course that all doesn't matter, you're an Nvidia fanboy and need to latch to any excuse that AMD just cannot ever be an option" - that's not true and I already pointed out the flaws of your argument listing those sites. I sold my 3080 10gb - since, I thought it's better to have more vram for what I want to do - and I thought 10gb might be okay at the start - but, I thought it was a good time to upgrade/get something with more vram so I don't have to wonder if I'll need more (eventually). I am/was always open to AMD - since, I want to use Linux for it eventually - and the two programs I will use both have Linux versions - but, I plan to use Windows first to get up to speed and because, I think - even with Linux versions, they probably will have some obstacles or issues - and it sounds like that's a correct assumption. Regardless, I do WANT and plan to try the Linux versions. I was originally considering the 3090 and it's still on my radar. However, I prefer to have current gen gpus and the lower power/temps that they should have is more appealing - since the (used) price isn't much more than a used 3090 - we're talking used 7900 xtx, 4070 Ti/Super and MAYBE 4080 (not the scalpers').

                3) "It is clear you don't have an interest in Blender or AI as you already shifted the goalpost from actual performance to the offerings on a single workstation provider" - since, you're already incorrect and inaccurate on the latter part, I'll only address the first part. You're still inaccurate on the 'actual performance' - my point stands. Most ppl working with Blender - both general users, youtubers/reviewers, bloggers etc. - all of them recommend Nvidia - and I proved this already - you just choose to ignore it. Yes, they mention AMD but their ultimate conclusion is to point the reader towards an Nvidia card - the workstation businesses have Nvidia (majority gpu choices) in their PC configurations. It's not rocket science but you continue to make false and inaccurate accusations towards me - which just makes you look like a jerk and an arrogant troll.

                I still haven't ruled out an AMD gpu - but it has to be either a 7900 xt or a 7900 xtx - and I suspect it will have the issues that most ppl talk about, which I have mentioned numerous times. The ZLUDA thing is very interesting, though. I have discovered that the Optix denoiser doesn't 'look good' - but that is rarely pointed out by reviewers (as a reason to avoid Nvidia gpus or to find fault with them), so it seems like an 'anti-Nvidia' perspective (again). If you and your buddy can find more 'experts' discussing this as a good reason to pick AMD instead, I'm open to this point.

                The other reason to consider a 7900 xtx is to get - hey, guess what - more vram. 24gb would be enough, I'm sure, and having more is better than not having enough. Again, that's why I decided to sell the 3080 - plus, I knew the used values would come down. I will also use video editing software, and it's better to have more vram for that, too, right?

                I was going to like your previous post - as it wasn't as insulting - but then your latest one was quite negative, so I changed my mind. This is a sample of the Blender pages discussing vram:
                Our take on Blender's official System Requirements will tell you what kind of Hardware you'll need for your specific Blender Workloads.

                Video RAM, or VRAM, stores graphics-based data such as images and videos during and after processing.

                "But you will need higher VRAM to create high-poly 3D objects and complex scenes, realistic animated characters or movies, and perform image and video rendering. The best option is to have 10 to 16 GB of Video RAM for these purposes."

                "However, to complete more difficult activities such as sculpting at a high resolution, running sophisticated simulations, or rendering enormous scenes, you will need at least 8 GB of RAM. For serious 3D artists working on enormous projects, graphics processing units (GPUs) with 16 GB or 32 GB of video memory (VRAM) are perfect.
                Simply put, the more video random access memory (VRAM) you have, the smoother and quicker Blender will run while managing sophisticated 3D drawings; therefore, it is preferable to match the amount of VRAM you have with the scope of your creative goals."



                • Originally posted by duby229 View Post
                  I think the point is actually that the quality of Optix rendering is very subpar. Yes, it seems faster, but actually it isn't doing what the other codepaths are doing.

                  You can ignore the fact that renders with Optix look like ass all you want.

                  You do -NOT- get better performance; actually, what you get is much worse quality.

                  ...
                  Yes, the OptiX denoiser is very useful for quick preview renders / IPR, but not for final animations, because it has no temporal stability. Maya/Arnold here.



                  • Originally posted by Panix View Post
                    More insults again. Great.
                    You literally started it and always re-escalated; you get what you deserve.
                    If you had just asked nicely from the start - "Hey, I am looking for a new GPU but have my doubts about AMD, could someone (or 'you', if addressed to me) list the benefits and downsides of both sides? I am looking to do Blender, and HIP-RT doesn't seem finished or close." - this conversation would have gone quite differently. But your first response was to call me insane in the most childish way possible and claim the GPU has no use outside of gaming (something you have said a lot until recently, by the way).

                    Originally posted by Panix View Post
                    I am pretty sure I've done quite a bit of research, thanks. But, let's look at your statements and argument, first, okay? I cited Puget since their main business is productivity software - not just Blender.
                    I first brought up the Puget benchmark to point out the importance of VRAM in a different thread, about Blender 4.1 CPU performance, at 12 February 2024, 06:46 PM, partly addressed to you (and since you started this entire thing by responding to something that wasn't addressed to you at all, I think it is pretty obvious you would have read it).

                    In October of 2023 you posted "AMD has no support for productivity use." in the Blender 4.1 RDNA3 APU thread, but the 2022 Puget benchmark shows some clear value for it in DaVinci Resolve. Even in Adobe Premiere it sometimes scores a win, and it shows for its price that it can be considered an option for those 2 productivity use cases.
                    You still stood behind the gaming-only sentiment in this very thread at 13 February 2024, 07:25 PM, calling the AMD card a gaming card (with no remark on the video editing, demanding they lower their price).
                    The first time you even mentioned "Puget" in your entire Phoronix post history is in this thread at 14 February 2024, 03:34 PM, and even then you didn't give AMD credit for its wins in that benchmark, so I ain't falling for it, chief.

                    And even then I had to later point out how you misconstrued the LoRA article, as you didn't actually read it (charts are tempting after all), so you weren't even aware that gradient checkpointing is something you generally want to avoid, as it tanks performance but allows you to run on lower VRAM - like the RTX 4080, which fell off pretty quickly on SDXL. If you truly are as objective as you say you are, and actually did the research by reading the articles, you would have pointed any of this out, but that is not what you did. Face it: you were of the opinion that the RX7900XTX was not a choice outside of gaming, your post history reflects this, and Phoronix's search on post history makes this easy to prove. Even after I started linking the articles, it took a while for you to even start considering the AMD cards as non-gaming possibilities.
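                    For readers who haven't run into it: gradient checkpointing is the usual recompute-instead-of-store trick - activations are dropped in the forward pass and recomputed during backward, which cuts VRAM at the cost of extra compute. A tiny PyTorch sketch of the mechanism (the toy block here is made up, not anything from the LoRA article):

                    ```python
                    import torch
                    from torch.utils.checkpoint import checkpoint

                    # Made-up toy block, just to show the trade-off.
                    block = torch.nn.Sequential(
                        torch.nn.Linear(1024, 1024),
                        torch.nn.ReLU(),
                        torch.nn.Linear(1024, 1024),
                    )
                    x = torch.randn(64, 1024, requires_grad=True)

                    y_plain = block(x)                                   # keeps activations for backward: more memory, faster
                    y_ckpt = checkpoint(block, x, use_reentrant=False)   # recomputes them in backward: less memory, slower

                    y_ckpt.sum().backward()
                    ```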

                    Originally posted by Panix View Post
                    You chose to point out sites that are either Gaming AND or 'everything' - which is fine but I chose Puget for a specific reason.
                    No, you doubled down on the site's GPU offerings, not even the reviews - just pointing to their active stock, where I had to point out the lack of newer Nvidia cards despite them praising the RTX 4070 Ti Super in multiple regards. I also had to point out that Puget actually had some praise in their content creator review of the 7900XTX, over a year back, when even I would not have been on board. And gaming? The other site I brought up was Techgage, a site that has some pretty good in-depth reviews on render engines in general. But the Blender deep dive doesn't fit your narrative, so once again you ignore it - you still do. You now agree more VRAM is better, but you still can't admit the actually impressive viewport and material preview performance AMD has in Blender, which I have pointed out has its uses. Or do you mean the prebuilt websites I mention that also make gaming PCs? So what? Most of them also make workstations; they serve a bigger market, so they have more versatility in their components, and they still list the AMD cards under their workstations as well. This reasoning is once again an excuse to add to the many you have now accumulated.

                    I don't even need to respond to the rest; if you aren't a troll then it is just getting sad. You're making up lies about your own opinion while you lacked any nuance when presenting it before, so it has been blatantly obvious where you stood before this back and forth started. Most people recommend Nvidia because that has been the norm. I have talked about the history of HIP and Optix, and even how it has only been very recently that, for instance, the 7900XT has overtaken the RTX 4070 in some Cycles benchmarks. 2 years ago everyone, myself included, went with Nvidia, because HIP just wasn't anywhere near there yet (hell, even in Eevee and the viewport, RX 6000 cards were slacking) - mind you, this was also when Optix wasn't the standard yet but was showing a lot of promise. But now, today, the situation is much different, and with AMD's lower prices, better RAM offerings and even wins in certain benchmarks, they should not be left out of consideration, especially for the appropriate use cases that don't always benefit as much from Optix, nor from HIP-RT in the future.

                    I have already said everything there is to say, and if you are considering the RX7900XTX now, good on you. Just read back my previous comments and consider what you wish to make - if anything, figure this out before buying a new GPU. Maybe the 4070 Ti Super or the 7900XT 20GB fulfills your needs, and you can spend the rest of the money on Blender Market (they have a sale that started today, great timing) to buy addons that speed up your workflow a lot more than RT acceleration or even the difference between most of those cards (I spent well into the triple digits just so some things that took well over 30 minutes could be done in half the time, with sometimes better results).

