Former Nouveau Lead Developer Joins NVIDIA, Continues Working On Open-Source Driver

  • Originally posted by oiaohm View Post

    The RX 7800 XT is a 16GB card and the RX 7900 GRE is a 16GB card, while the 4070 Super is a 12GB card. Guess what: RAM consumes power while it's running. The power difference between the 4070 Super and the RX 7800 XT / RX 7900 GRE is mostly the RAM.

    The 4070 Super is not cheaper everywhere. That 4GB of RAM can be the difference between a compute workload working well or not, and a 16GB card from Nvidia is a lot more expensive.

    Linux users like me build systems we run for at least a decade. With AMD I will have driver updates for that time frame; with Nvidia, not so much. So does the extra Nvidia performance matter if, in 5 years' time, I am going to have to replace the Nvidia card because its drivers don't want to work with whatever the current X11/Wayland stack on Linux is by then, while the AMD card is going to be good for 10+ years?

    Part of long-term performance is driver support. It's common for AMD developers to find something wrong in the current driver, see that an older driver had the same problem on Linux, and fix that as well.

    Since I'm building a system to run for a decade, I am comparing the 7900 GRE against the 7800 XT; the 4070 Super from Nvidia does not come into it.
    I hear you, but I am talking about the overall superior card, both software- and hardware-wise, and AFAIK the Nvidia side is better when comparing the 7800 XT and the 7900 GRE. If you want more VRAM then sure, you will have to pay for it - that's where Nvidia are 'bastards' - but the 4070 Ti Super has 16GB and its power consumption is still more efficient, so what about your argument, huh? 288W vs 304W.
    The ASUS Radeon RX 7900 GRE TUF OC comes with a dual BIOS feature and a premium all-metal cooling solution that runs whisper-quiet. The TUF also offers adjustable RGB lighting, and cooling performance that's among the best of all the GRE cards that we've tested.

    Power spikes and v-sync at 60 Hz - major power differences there, too - with the GRE consuming even more power than its 7900 'siblings.' Wow.
    It's also better for GPU compute and other productivity tasks (with a few exceptions):

    The FOSS aspect is the ONLY reason to even CONSIDER the AMD GPU - and if explicit sync 'solves' some issues, then there's even more incentive to pick the Nvidia card, whichever it is. I'm waiting to see if it does - as AMD doesn't look like it's fixing any of the productivity software problems, not even close or anytime soon, so I have no choice.
    Last edited by Panix; 30 April 2024, 05:25 PM.



    • Originally posted by Panix View Post
      I hear you, but I am talking about the overall superior card, both software- and hardware-wise, and AFAIK the Nvidia side is better when comparing the 7800 XT and the 7900 GRE. If you want more VRAM then sure, you will have to pay for it - that's where Nvidia are 'bastards' - but the 4070 Ti Super has 16GB and its power consumption is still more efficient, so what about your argument, huh? 288W vs 304W.
      https://www.techpowerup.com/review/a...re-tuf/39.html
      You need to look at that again. There is a stock RX 7900 GRE in their list. The ASUS card is overclocked.

      There is another thing: DisplayPort 2.1 is on the AMD cards, whereas the Nvidia card only has DisplayPort 1.4a. This is important for hooking up some monitors, particularly because you need DisplayPort 2.1 to use a DisplayPort-to-HDMI 2.1 dongle that works well.

      Less RAM and worse connectivity is what you get with the 4070 Ti Super. Remember, I said my target is to use the card for 10 years. Poorer connectivity is a big factor.



      • Originally posted by Panix View Post
        I hear you, but I am talking about the overall superior card, both software- and hardware-wise, and AFAIK the Nvidia side is better when comparing the 7800 XT and the 7900 GRE. If you want more VRAM then sure, you will have to pay for it - that's where Nvidia are 'bastards' - but the 4070 Ti Super has 16GB and its power consumption is still more efficient, so what about your argument, huh? 288W vs 304W.
        The ASUS Radeon RX 7900 GRE TUF OC comes with a dual BIOS feature and a premium all-metal cooling solution that runs whisper-quiet. The TUF also offers adjustable RGB lighting, and cooling performance that's among the best of all the GRE cards that we've tested.

        Power spikes and v-sync at 60 Hz - major power differences there, too - with the GRE consuming even more power than its 7900 'siblings.' Wow.
        It's also better for GPU compute and other productivity tasks (with a few exceptions):
        Man, if only there was a Linux site for this Linux-centric forum that did benchmarks and showed the 7900 GRE having better power consumption and performance per dollar on Linux compared to the RX 7900 XT(X) and 4070 Ti Super. Oh wait.


        And the peak power is even lower compared to the benchmarks you provided, hot damn.
        Also, isn't it hilarious that a benchmark you actively commented on is one you don't link, since it of course doesn't fit your narrative?

        Originally posted by Panix View Post
        The FOSS aspect is the ONLY reason to even CONSIDER the AMD GPU - and if explicit sync 'solves' some issues, then there's even more incentive to pick the Nvidia card, whichever it is.
        Gaming performance, Vulkan compute, more RAM, video editing in both DaVinci Resolve and Adobe Premiere Pro, 3D sculpting and modeling while using a rasterization-based engine like Eevee or BEER to render. Seems like there is more than just the FOSS aspect.

        Originally posted by Panix View Post
        I'm waiting to see if it does - as AMD doesn't look like it's fixing any of the productivity software problems, not even close or anytime soon, so I have no choice.

        Lol, as if you would ever do anything with productivity software. Your only understanding of it comes from benchmarks, and even there you need to cherry-pick whatever floats your boat; for example, you don't bring up that the Nvidia cards currently have a performance regression in Blender 4.0. And while no one seems to have tested this for Blender 4.1, the Open Data benchmark site would suggest that Nvidia cards regressed even further with 4.1 while AMD actually saw an increase in their score.

        So oh no, terrible news, right? Don't use Blender 4.1 with Nvidia! Except no, since Blender 4.1 comes with a ton of features that make the performance regression in those benchmarks negligible, one of my favorites being Geometry Nodes baking, which has doubled the performance in a lot of my scenes for both Cycles and Eevee with the click of a button. And if you actually learned anything from my previous explanations of VRAM to you, you might remember that baking is where you precalculate something, increasing VRAM use but also increasing performance, and that has now been added to Geometry Nodes.

        Meanwhile, an absolute madman on the Blender subreddit used the new Intel Open Image Denoise (OIDN) GPU acceleration that came with Blender 4.1 to render a video in Cycles at 5 FPS by lowering the sample count to 4 on an RTX 4070 Ti Super. So while the RTX 4070 Ti Super also saw a 10% drop in performance in the Blender benchmark since 3.6, it has better real-world rendering performance, as OIDN is on by default and is now significantly faster thanks to the added GPU acceleration.
        As a small side note, the denoiser also uses VRAM, and the tooltip even warns you that the bigger the scene, the more VRAM is required. From my own testing, OIDN still uses less VRAM than the Nvidia OptiX denoiser while offering better results, and to top it off it works on GPUs from all three vendors, unlike the OptiX denoiser.
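
        For anyone who wants to try it, here is a minimal sketch of that setup from Blender's Python console. The standard bpy properties (samples, use_denoising, denoiser) are real; denoising_use_gpu is my assumption for the name of the 4.1 GPU toggle, so check your build:

        Code:
        import bpy

        scene = bpy.context.scene
        scene.render.engine = 'CYCLES'              # path-traced render engine
        scene.cycles.samples = 4                    # very low sample count, as in the Reddit example
        scene.cycles.use_denoising = True           # let the denoiser clean up the noisy result
        scene.cycles.denoiser = 'OPENIMAGEDENOISE'  # Intel OIDN instead of OptiX
        scene.cycles.denoising_use_gpu = True       # 4.1 GPU acceleration (property name assumed)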

        But neither of these performance gains is visible in the benchmark. Most of the benchmark scenes used by Open Data are older and still use particle systems, where these days tutorials would suggest and use geometry nodes, and OIDN is technically post-processing to get a clearer image from fewer samples (in the above case single digits, which is pretty nuts), while the Open Data benchmark score is the estimated number of samples per minute, summed across all benchmark scenes. So while the card now produces fewer samples in the latest Blender version, it still actually renders faster, not to mention a ton of other great features like Light Linking.
        Last edited by tenchrio; 03 May 2024, 07:40 PM.



        • Originally posted by tenchrio View Post

          Man, if only there was a Linux site for this Linux-centric forum that did benchmarks and showed the 7900 GRE having better power consumption and performance on Linux compared to the RX 7900 XT(X) and 4070 Ti Super. Oh wait.


          And the peak power is even lower compared to the benchmarks you provided, hot damn.
          Also, isn't it hilarious that a benchmark you actively commented on is one you don't link, since it of course doesn't fit your narrative?

          Gaming performance, Vulkan compute, more RAM, video editing in both DaVinci Resolve and Adobe Premiere Pro, 3D sculpting and modeling while using a rasterization-based engine like Eevee or BEER to render. Seems like there is more than just the FOSS aspect.

          Lol, as if you would ever do anything with productivity software. Your only understanding of it comes from benchmarks, and even there you need to cherry-pick whatever floats your boat; for example, you don't bring up that the Nvidia cards currently have a performance regression in Blender 4.0. And while no one seems to have tested this for Blender 4.1, the Open Data benchmark site would suggest that Nvidia cards regressed even further with 4.1 while AMD actually saw an increase in their score.

          So oh no, terrible news, right? Don't use Blender 4.1 with Nvidia! Except no, since Blender 4.1 comes with a ton of features that make the performance regression in those benchmarks negligible, one of my favorites being Geometry Nodes baking, which has doubled the performance in a lot of my scenes for both Cycles and Eevee with the click of a button. And if you actually learned anything from my previous explanations of VRAM to you, you might remember that baking is where you precalculate something, increasing VRAM use but also increasing performance, and that has now been added to Geometry Nodes.

          Meanwhile, an absolute madman on the Blender subreddit used the new Intel Open Image Denoise (OIDN) GPU acceleration that came with Blender 4.1 to render a video in Cycles at 5 FPS by lowering the sample count to 4 on an RTX 4070 Ti Super. So while the RTX 4070 Ti Super also saw a 10% drop in performance in the Blender benchmark since 3.6, it has better real-world rendering performance, as OIDN is on by default and is now significantly faster thanks to the added GPU acceleration.
          As a small side note, the denoiser also uses VRAM, and the tooltip even warns you that the bigger the scene, the more VRAM is required. From my own testing, OIDN still uses less VRAM than the Nvidia OptiX denoiser while offering better results, and to top it off it works on GPUs from all three vendors, unlike the OptiX denoiser.

          But neither of these performance gains is visible in the benchmark. Most of the benchmark scenes used by Open Data are older and still use particle systems, where these days tutorials would suggest and use geometry nodes, and OIDN is technically post-processing to get a clearer image from fewer samples (in the above case single digits, which is pretty nuts), while the Open Data benchmark score is the estimated number of samples per minute, summed across all benchmark scenes. So while the card now produces fewer samples in the latest Blender version, it still actually renders faster, not to mention a ton of other great features like Light Linking.
          Huh? You're so full of manure...lol.... how is the power consumption better than a 4070 Ti Super? What are you looking at?

          No one in their right mind would try to argue that a 7900 series card, even the 7900 GRE, has better or more efficient power consumption than a 4070 Ti Super - maybe you meant the 4070? It's about even with that card on his graph.

          Even most AMD fans will concede that the 4070 series is more power efficient than any AMD 7900 RDNA 3 card.

          AMD GPUs still suck in Blender and only use HIP - which is slower than the hacked ZLUDA - and the 7900 XTX is slower at rendering than the 4070 series, whether it's the 12GB vanilla card, the 4070 Super also at 12GB of VRAM, or the 16GB 4070 Ti Super - yes, those cards are able to use OptiX, so it's a bit unfair, but then it's AMD and Blender's fault - one or both of them - that haven't been able to use the OPEN SOURCE ray-tracing element of HIP-RT.

          So, stop smoking all those drugs you have.



          • Originally posted by Panix View Post
            Huh? You're so full of manure...lol.... how is the power consumption better than a 4070 Ti Super? What are you looking at?
            [Attached screenshot of the power consumption graph]
            7900 GRE | Min: 6 W | Avg: 200.15 W | Max: 240 W
            4070 Ti Super | Min: 10.47 W | Avg: 228.6 W | Max: 284.93 W


            Now, I might not be a mathematician, but I think 228 is a bigger number than 200 and 284 is a bigger number than 240.
            It seems my previous message had a typo and was meant to say "better power consumption and performance per dollar". Normally I would say my bad, but commenting to you is such a draining chore, as you seem to barely be able to read any of the articles you link yourself, and honestly writing these replies feels like a waste of time.

            The 7900 GRE is rated for a 260W TDP; the 4070 Ti Super is 285W. But I guess that would require explaining TDP to you.
            Again, you commented on that article; the least you can do is actually read the articles you comment on. Oh wait, you don't really do that, do you? That is still evident as you point towards 60 Hz v-sync and power spikes, not understanding that power spikes are not an actual benchmark of any sort and in that case came specifically from running FurMark, as is evident from the graph at the top of the page of the TechPowerUp article you linked, while during gaming and video playback that spike wasn't there at all. And of course v-sync is outdated, not to mention optional and generally looked down upon; if you need to enable 60 Hz v-sync, you don't need a new GPU, you need a new monitor.

            Originally posted by Panix View Post
            AMD gpus still suck in Blender and only use HIP
            The Eevee render engine, again the default render engine for Blender, uses OpenGL. So does Workbench, the render engine used for the viewport during modeling, sculpting and animation preview. Eevee-Next, Blender's upcoming render engine that uses ray tracing, will use Vulkan (and will most likely become the default in the near future). And I have shared countless benchmarks with you where AMD has proven it has decent performance in one and absolutely excels in the other compared to Nvidia.

            Originally posted by Panix View Post

            - which is slower than the hacked ZLUDA - and the 7900 XTX is slower at rendering than the 4070 series, whether it's the 12GB vanilla card, the 4070 Super also at 12GB of VRAM, or the 16GB 4070 Ti Super - yes, those cards are able to use OptiX, so it's a bit unfair, but then it's AMD and Blender's fault - one or both of them - that haven't been able to use the OPEN SOURCE ray-tracing element of HIP-RT.
            And again, all those performance metrics you mention are for Cycles without OIDN, and the benchmark files are unoptimized; they have been for a long time (note how this guy talks about 400 samples, while I linked a thread in the post you quoted where someone was using 4 samples in Blender 4.1, and that 400 isn't even the default for the BMW27_GPU benchmark: if you download it today, BMW27_GPU still has 1225 samples). And as said before, AMD has its own render engine that can be used inside of Blender and that uses RT acceleration, so there is that option if you want RT acceleration so dearly (you even get a free material library).

            If I run the BMW benchmark at 1440p with about 12 samples, but turn the denoiser and adaptive sampling on (they are off because they did not exist back in 2016, but they are now on by default whenever you make a new project) and disable tiling and the BVH options, I get a render time of 6.3 seconds. Meanwhile, someone compared OptiX and CUDA on the RTX 4090 using BMW at 1440p and clearly kept the default settings ("Sample x/1225" during rendering), achieving 41.64s on CUDA and 28.46s on OptiX. I must have some kind of insane futuristic graphics card to pull off 6.3s, a whopping quarter of the render time of an RTX 4090 using OptiX!! Or I know how to optimize render settings, and I have been around long enough to know that BMW is outdated and is also present in the test suite of the Blender Open Data benchmark used to calculate the samples/min. And this isn't even mentioning the ways the actual materials in the scene can be optimized as well (the benchmark's last update dates from 2016, after all): Add Shader nodes that can be replaced by the Principled BSDF, Mix Shader nodes for the roughness that could be baked onto the already existing UVs, etc.
            [Attached screenshot of the render result]
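
            If it helps, here is a rough sketch of those tweaks as they would be set from Blender's Python console. This is my own illustration of the settings described above, not the benchmark's official configuration; it uses standard bpy properties:

            Code:
            import bpy

            scene = bpy.context.scene
            scene.render.resolution_x = 2560            # 1440p output
            scene.render.resolution_y = 1440
            scene.cycles.samples = 12                   # ~12 samples instead of the file's default 1225
            scene.cycles.use_adaptive_sampling = True   # stop sampling pixels that have converged
            scene.cycles.use_denoising = True           # the denoiser recovers detail from the low sample count
            scene.cycles.use_auto_tile = False          # render the frame in one pass instead of tiles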

            Not to mention that I have told you countless times (even linking to articles from multiple sites on the topic, written not by tech reviewers but by actual Blender artists) that, depending on your use case, it is very likely you won't ever need Cycles and by extension HIP/HIP-RT. I think I have asked you a million times what it is you wish to make in Blender, but it seems you care more about Blender as a benchmark than about the amazing 3D tool that it is, because the absolute truth is that you are incredibly biased and hate AMD. I have linked to everything so many times, and each time you bring up the same points in the same disingenuous and untrue way. You don't care, you're not going to use Blender, that much is clear. You just need something to bash AMD on, and since it can't be gaming you need to find the next most popular thing (which is funny, as for the 7900 GRE you point to a lot of gaming-focused articles).

            Originally posted by Panix View Post

            and the 7900 XTX is slower at rendering than the 4070 series - whether it's the 12GB vanilla card, the 4070 Super also at 12GB of VRAM, or the 16GB 4070 Ti Super.

            Remember the Blender 3.6 deep dive I linked you before, where RT acceleration barely affected performance in one of the Cycles benchmarks? Remember how the RTX 4070 performed slightly worse than the RX 7900 XTX and RX 7900 XT with RT both on and off? Remember the comment you just quoted, and how I told you that Blender 4.0 and 4.1 show a performance regression on every Nvidia card while AMD, with the RX 7900 XTX, is seeing performance increases on both versions? Heck, I would ask whether you remember the Blender Rookie video on how the RTX 3060 12GB beat the RTX 3070 (8GB) because the benchmark in question was VRAM-intensive, so system RAM had to be used for the 3070, and it took the RTX 3070 more than double the time (4 min 52 s) compared to the RTX 3060 12GB (2 min 9 s), showing the absolute importance VRAM can have (and 24GB is double 12GB). So saying "the 7900 xtx is slower at rendering than the 4070 series" is plain wrong; it's more along the lines of "the 7900 XTX is sometimes slower at rendering with Cycles than the 4070 series, depending on the scene (and 4070 card) in question".

            AMD works with Blender; for some use cases it is faster, for some it is not, and HIP-RT is not a requirement, not even for Cycles.
            If your use case or workflow benefits from Nvidia, you go with Nvidia; if your use case or workflow benefits from AMD, you go with AMD (and yes, quite a few do - not all, but enough that AMD is viable). What you don't do is focus on a benchmark that is barely representative of one workflow, all while not even using the software in question, pretending you're an expert while only referencing tech-reviewer benchmarks (which tend to be lazily done to begin with), and on top of that disregarding any fact or knowledge coming from someone with actual experience in said software, just to bash a single GPU manufacturer - that would be pretty pathetic.

            Originally posted by Panix View Post

            So, stop smoking all those drugs you have.
            Maybe you need to start taking the ones the doctor is prescribing to you, because the memory loss is getting quite bad.
            Last edited by tenchrio; 08 May 2024, 06:26 AM.



            • Originally posted by Panix View Post
              Huh? You're so full of manure...lol.... how is the power consumption better than a 4070 Ti Super? What are you looking at?

              No one in their right mind would try to argue that a 7900 series card, even the 7900 GRE, has better or more efficient power consumption than a 4070 Ti Super - maybe you meant the 4070? It's about even with that card on his graph.

              Even most AMD fans will concede that the 4070 series is more power efficient than any AMD 7900 RDNA 3 card.

              AMD GPUs still suck in Blender and only use HIP - which is slower than the hacked ZLUDA - and the 7900 XTX is slower at rendering than the 4070 series, whether it's the 12GB vanilla card, the 4070 Super also at 12GB of VRAM, or the 16GB 4070 Ti Super - yes, those cards are able to use OptiX, so it's a bit unfair, but then it's AMD and Blender's fault - one or both of them - that haven't been able to use the OPEN SOURCE ray-tracing element of HIP-RT.

              So, stop smoking all those drugs you have.
              I have a question ........

              How can someone be full of manure? Sorry, English isn't my first language, so I don't understand.

