AMD Publishes FSR 3 Source Code


  • #81
    Originally posted by qarium View Post

    But the AMD Radeon PRO W7900 is the same chip as the 7900 XTX, and I use the same kernel driver and the same Mesa userspace driver. I do not use the closed-source AMD.com driver.
    The only difference between a 7900 XTX and the W7900 is the VRAM, 24 GB vs. 48 GB, and the W7900 has slightly slower RAM.
    Do you really think there is any relevant difference between the 7900 XTX you want and a W7900? Professionals buy the W7900 because it has ECC VRAM, while the 7900 XTX has only normal RAM.

    So of course I can test whatever you want.
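
    To check that claim yourself, here is a minimal sketch (assuming glxinfo from mesa-utils is installed) that confirms the in-tree amdgpu kernel driver and Mesa's radeonsi are the ones in use:

    ```python
    # Minimal sketch: verify the open-source driver stack is in use.
    # Assumes glxinfo (mesa-utils) is installed; paths are standard sysfs.
    import subprocess
    from pathlib import Path

    # The in-tree amdgpu kernel driver registers itself under /sys/module.
    print("amdgpu loaded:", Path("/sys/module/amdgpu").exists())

    # Mesa's radeonsi driver identifies itself in the renderer string.
    out = subprocess.run(["glxinfo", "-B"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "OpenGL renderer" in line:
            print(line.strip())  # e.g. "... (radeonsi, navi31, ...)"
    ```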



    phoronix.com does Blender benchmarks with the 7900 XTX... who cares about Windows anyway?
    ROCm/HIP works with Blender.
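
    As a minimal sketch of how that is wired up, assuming a Blender build with the HIP backend, Cycles can be pointed at HIP headlessly through Blender's Python API (the script name is a placeholder):

    ```python
    # Minimal sketch: select Cycles' HIP backend headlessly.
    # Run with: blender --background --python enable_hip.py
    import bpy

    prefs = bpy.context.preferences.addons["cycles"].preferences
    prefs.compute_device_type = "HIP"  # instead of CUDA/OPTIX on Nvidia
    prefs.get_devices()                # refresh the detected device list

    # Enable each HIP device found, e.g. a 7900 XTX or W7900.
    for dev in prefs.devices:
        dev.use = (dev.type == "HIP")

    bpy.context.scene.cycles.device = "GPU"
    ```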


    HIP-RT is released, but I do not know of any benchmarks.


    Maybe we get Michael to benchmark again with 4.0.
    Michael, can we have AMD Radeon 7900 XTX benchmarks with Blender 4.0 or newer, with HIP and HIP-RT, against CUDA and OptiX?

    "compares to any of the higher tier Ampere cards"

    I do not have any Nvidia card; my last Nvidia card was from 2012...



    Michael, can we also have DaVinci Resolve benchmarks on Linux for an AMD Radeon 7900 XTX?

    Can you tell me why you believe there is any big difference between a W7900 and a 7900 XTX? The 7900 XTX has even faster RAM, and many cards have an OC BIOS, which means it's faster than a W7900...
    I suppose there is little difference? I wouldn't know - I suspect not much. There are only 5 samples of a W7900 on the Blender Open Data website, versus about 400 samples for the 7900 XTX, and the median scores differ, 3600 vs. 3900 - so I would guess you're probably right here. I'm not sure how they arrive at these scores; I just compare the various GPUs and their corresponding scores.

    You could try it in Linux if you want - if you know how to set it up; I wouldn't know. In fact, some people say you need some closed-source components. There's an Arch Linux wiki page for Blender and AMD GPUs - I think a Fedora and a Nobara page about it, too, but I wouldn't swear to it - just from the little I recall.

    Perhaps Michael is waiting for more stability - for it to move from experimental to stable/official or something? Also, I have no idea if it works in Linux now in Blender 4.0 - I think the release notes would have mentioned something if it did. This is one of my *beefs* with AMD (GPUs) - the slow progress - and I am just using Blender as one example. Some people might not think so or care, but I think it's a legitimate critique, and if one reads all those notes (the links I provided), you can see that people who buy the cards, or consider buying one, are frustrated. :-/



    • #82
      Originally posted by Panix View Post
      I suppose there is little difference? I wouldn't know - I suspect not much. There are only 5 samples of a W7900 on the Blender Open Data website, versus about 400 samples for the 7900 XTX, and the median scores differ, 3600 vs. 3900 - so I would guess you're probably right here. I'm not sure how they arrive at these scores; I just compare the various GPUs and their corresponding scores.

      As soon as the sample fits into the 24 GB of VRAM, the 7900 XTX is faster than the W7900... the W7900 is only faster if the sample is bigger than 24 GB of VRAM.
      I already told you that some highly professional studios avoid GPUs entirely; instead they use CPUs with 512 GB of RAM or more. This is not because of performance; it is only because their scenes are so big that they do not fit in 24 GB of VRAM.
      The W7900 with 48 GB is a great benefit if you do AI deep learning with PyTorch and your model is too big to fit into 24 GB of VRAM.
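
      As a minimal sketch of that constraint, assuming a ROCm build of PyTorch (which reuses the torch.cuda namespace for AMD GPUs), you can compare free VRAM against a model's parameter footprint:

      ```python
      # Minimal sketch: will the model fit? Assumes a ROCm build of PyTorch,
      # which exposes AMD GPUs through the torch.cuda namespace unchanged.
      import torch

      free, total = torch.cuda.mem_get_info()  # bytes on device 0
      print(f"VRAM: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")

      # Stand-in model; the parameters alone give only a lower bound, since
      # activations and optimizer state come on top during training.
      model = torch.nn.Linear(8192, 8192)
      param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
      print(f"parameters need at least {param_bytes / 2**30:.2f} GiB")
      ```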

      But that's not the relevant question at all. The relevant question is this: ROCm/HIP is like the CUDA backend; it cannot compete with OptiX.
      Now the question is how the HIP-RT version competes with OptiX, and I cannot tell you, because I do not know.

      But I am sure Michael can do the benchmark.

      Originally posted by Panix View Post
      You could try it in Linux if you want - if you know how to set it up; I wouldn't know. In fact, some people say you need some closed-source components. There's an Arch Linux wiki page for Blender and AMD GPUs - I think a Fedora and a Nobara page about it, too, but I wouldn't swear to it - just from the little I recall.
      Perhaps Michael is waiting for more stability - for it to move from experimental to stable/official or something? Also, I have no idea if it works in Linux now in Blender 4.0 - I think the release notes would have mentioned something if it did. This is one of my *beefs* with AMD (GPUs) - the slow progress - and I am just using Blender as one example. Some people might not think so or care, but I think it's a legitimate critique, and if one reads all those notes (the links I provided), you can see that people who buy the cards, or consider buying one, are frustrated. :-/
      Yes, I could try to use HIP-RT. In the past I only tested ROCm/HIP.
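
      A minimal sketch of what that test could look like; the use_hiprt property name is an assumption based on the Blender 3.6+ release notes, hence the guard:

      ```python
      # Minimal sketch: request HIP-RT on top of the plain HIP backend.
      # The use_hiprt property name is assumed from Blender 3.6+ notes.
      import bpy

      prefs = bpy.context.preferences.addons["cycles"].preferences
      prefs.compute_device_type = "HIP"
      prefs.get_devices()

      if hasattr(prefs, "use_hiprt"):
          prefs.use_hiprt = True  # hardware-assisted ray traversal
      else:
          print("this Blender build exposes no HIP-RT toggle")
      ```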

      As I said to you already, the struggle is real. But it's also true that much of what people say is outdated. If you have a bad reputation from the past, it heals only very slowly; that's a fact.

      This all comes down to the fact that in 2017 AMD had no money to finance any compute efforts... they were near bankruptcy back then. In 2023 they are in a very different position. And if you analyze the numbers, Intel is in no good position against AMD... Nvidia also has problems on the CPU side; their CUDA monopoly does not help with that. They try to fix it with ARM CPUs, you know.



      • #83
        Originally posted by qarium View Post

        As soon as the sample fits into the 24 GB of VRAM, the 7900 XTX is faster than the W7900... the W7900 is only faster if the sample is bigger than 24 GB of VRAM.
        I already told you that some highly professional studios avoid GPUs entirely; instead they use CPUs with 512 GB of RAM or more. This is not because of performance; it is only because their scenes are so big that they do not fit in 24 GB of VRAM.
        The W7900 with 48 GB is a great benefit if you do AI deep learning with PyTorch and your model is too big to fit into 24 GB of VRAM.

        But that's not the relevant question at all. The relevant question is this: ROCm/HIP is like the CUDA backend; it cannot compete with OptiX.
        Now the question is how the HIP-RT version competes with OptiX, and I cannot tell you, because I do not know.

        But I am sure Michael can do the benchmark.
        Yes, I could try to use HIP-RT. In the past I only tested ROCm/HIP.

        As I said to you already, the struggle is real. But it's also true that much of what people say is outdated. If you have a bad reputation from the past, it heals only very slowly; that's a fact.

        This all comes down to the fact that in 2017 AMD had no money to finance any compute efforts... they were near bankruptcy back then. In 2023 they are in a very different position. And if you analyze the numbers, Intel is in no good position against AMD... Nvidia also has problems on the CPU side; their CUDA monopoly does not help with that. They try to fix it with ARM CPUs, you know.
        1) Correct - but I don't have the budget for a Threadripper CPU or something like that - I'm just going to use what I have, which is a modern 12-core CPU. It will have to do for now. Also, my budget is working towards the price of a 7900 XT or maybe (dream) a 7900 XTX - else I'll bite my lip, hold my nose and get a used 3090 or something like that.
        2) I don't have the budget for a workstation card - I know that is ideal for this stuff - but, like I said, I might game sometimes, too. It'll be an all-purpose card, but I do prefer the extra VRAM if I can get it - so 16 GB+.
        3) No one seems to know - I've read a lot of speculation, though. The word on the street is that HIP-RT won't be competitive, but I thought that if any of the GPUs will be, it'll be the 7900 series - how good or bad it will be, I don't know. There's also the two-pronged problem: in Windows it 'might' work now, and in Linux it might ultimately work and might even end up with better performance than in Windows, as that has been the trend for Blender, afaik.

        My guess is that HIP-RT is barely working in Windows right now and won't work in Linux atm - there seems to be a lot of 'stuff to figure out', since HIP-RT is not open source. HIP is, ROCm is - but the ray tracing part (library, API?) isn't... but don't ask me.



        • #84
          Originally posted by Panix View Post
          1) Correct - but I don't have the budget for a Threadripper CPU or something like that - I'm just going to use what I have, which is a modern 12-core CPU. It will have to do for now. Also, my budget is working towards the price of a 7900 XT or maybe (dream) a 7900 XTX - else I'll bite my lip, hold my nose and get a used 3090 or something like that.
          2) I don't have the budget for a workstation card - I know that is ideal for this stuff - but, like I said, I might game sometimes, too. It'll be an all-purpose card, but I do prefer the extra VRAM if I can get it - so 16 GB+.
          3) No one seems to know - I've read a lot of speculation, though. The word on the street is that HIP-RT won't be competitive, but I thought that if any of the GPUs will be, it'll be the 7900 series - how good or bad it will be, I don't know. There's also the two-pronged problem: in Windows it 'might' work now, and in Linux it might ultimately work and might even end up with better performance than in Windows, as that has been the trend for Blender, afaik.
          My guess is that HIP-RT is barely working in Windows right now and won't work in Linux atm - there seems to be a lot of 'stuff to figure out', since HIP-RT is not open source. HIP is, ROCm is - but the ray tracing part (library, API?) isn't... but don't ask me.
          Of course HIP-RT will not be competitive with OptiX, and the reason is that the Nvidia RTX 4000 cards have more parts of the ray tracing work accelerated by dedicated hardware.
          The AMD 6000/7000 cards have ray tracing acceleration hardware, but it is only a light upgrade over a pure software solution, and the difference between the 6000 and 7000 cards in ray tracing is only 17%.

          But the point is that HIP-RT should be faster than the ROCm/HIP backend. So if HIP is maybe too slow for your taste, HIP-RT could be good enough for you. (I do not expect OptiX-level performance.)

          A Threadripper system for Blender is cheaper than you think; for example, an upgrade of my Threadripper 2 system to a 32-core is only 720€:
          Discover the AMD Ryzen Threadripper 2990WX CPU, 32 cores, 64 threads, up to 4.2 GHz, 64 MB L3 cache, in the wide selection on eBay. Free delivery on many items!

          With that you can go up to 256 GB of RAM.

          Believe it or not, many customers go with a Threadripper for Blender to avoid the CUDA/OptiX monopoly...
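
          As a minimal sketch of that workflow (scene.blend and the frame number are placeholders), a headless CPU render is bounded by system RAM rather than VRAM:

          ```python
          # Minimal sketch: headless CPU render, limited by system RAM, not VRAM.
          # "scene.blend" and the frame number are placeholders.
          import subprocess

          subprocess.run([
              "blender", "-b", "scene.blend",  # -b = run without UI
              "-E", "CYCLES",                  # select the Cycles engine
              "-f", "1",                       # render frame 1
              "--", "--cycles-device", "CPU",  # force CPU instead of HIP/CUDA
          ], check=True)
          ```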



          • #85
            Originally posted by avis View Post

            That's ad hominem, i.e. a fallacy.



            That's a red herring, i.e. a fallacy.

            Despite this, I will play by your rules: Mesa and kernel devs don't give a fuck about proprietary NVIDIA drivers: they don't deal with them, they don't care about them, and many Linux kernel developers have actively made NVIDIA's life worse by making kernel interfaces private (GPL-only).

            Start with Logic 101, then we can talk. As with every sect, the most vocal and belligerent followers of the Linux sect despise logic.
            Lie much? The actual truth is that Nvidia has been busted abusing GPLed interfaces without abiding by the terms of the GPL dozens of times, and every time they get caught red-handed the kernel devs are forced to take appropriate action...

            How about truth 101...

            EDIT: You can make up bullshit and call it logic all you want, but that's what makes you a liar...

            EDIT: nVidia could abide by the terms of the GPL and -THAT- would solve their own problems that -they- caused.
            Last edited by duby229; 26 December 2023, 02:02 PM.



            • #86
              Originally posted by qarium View Post

              Of course HIP-RT will not be competitive with OptiX, and the reason is that the Nvidia RTX 4000 cards have more parts of the ray tracing work accelerated by dedicated hardware.
              The AMD 6000/7000 cards have ray tracing acceleration hardware, but it is only a light upgrade over a pure software solution, and the difference between the 6000 and 7000 cards in ray tracing is only 17%.

              But the point is that HIP-RT should be faster than the ROCm/HIP backend. So if HIP is maybe too slow for your taste, HIP-RT could be good enough for you. (I do not expect OptiX-level performance.)

              A Threadripper system for Blender is cheaper than you think; for example, an upgrade of my Threadripper 2 system to a 32-core is only 720€:
              Discover the AMD Ryzen Threadripper 2990WX CPU, 32 cores, 64 threads, up to 4.2 GHz, 64 MB L3 cache, in the wide selection on eBay. Free delivery on many items!

              With that you can go up to 256 GB of RAM.

              Believe it or not, many customers go with a Threadripper for Blender to avoid the CUDA/OptiX monopoly...
              Seriously? I'm talking about getting a GPU upgrade, not investing in a Threadripper system. Are you buying it for me? Are prices really that cheap in Germany, suddenly?



              • #87
                Originally posted by Panix View Post
                Seriously? I'm talking about getting a GPU upgrade, not investing in a Threadripper system. Are you buying it for me? Are prices really that cheap in Germany, suddenly?
                You said there is no other option than to buy an Nvidia card and obey the CUDA monopoly.

                It was just a simple argument that many people reject the CUDA monopoly and buy a Threadripper instead.



                • #88
                  Originally posted by qarium View Post

                  You said there is no other option than to buy an Nvidia card and obey the CUDA monopoly.

                  It was just a simple argument that many people reject the CUDA monopoly and buy a Threadripper instead.
                  I already know a Threadripper PC would probably make my complaints/concerns moot, but I think I made it clear I already have a system/PC and am not in the market (read: can't afford a Threadripper build). If I were anywhere close, I could just budget for a 4090 or 7900 XTX and pick one of those?

                  The popular open-source 3D animation software Blender has a built-in render engine which can be used for testing CPU and GPU performance.


                  I have a 12700K with 64 GB of RAM. I would have bought AMD, but only Ryzen/Zen 3 was available and the 59xx series was EOL - Zen 4 wasn't out yet (I wanted to wait and get that, but oh well). I went with Intel, since there was still a bit of an upgrade path and I'd get Quick Sync as an added benefit for video editing (possibly).

                  Anyway, that's all redundant - I was looking at GPUs for Blender, video editing and some gaming, and it was a comparison between similarly priced GPUs in my country - used 3090, 4070 Ti, 7900 XT - trying to decide what to pick, or wait, or whatever. A Threadripper isn't an option for me, so I wasn't including it as any kind of choice - even though it is faster as just a CPU compared to the rest. It has nothing to do with promoting or rejecting CUDA.



                  • #89
                    Originally posted by Panix View Post
                    I already know a Threadripper PC would probably make my complaints/concerns moot, but I think I made it clear I already have a system/PC and am not in the market (read: can't afford a Threadripper build). If I were anywhere close, I could just budget for a 4090 or 7900 XTX and pick one of those?
                    The popular open-source 3D animation software Blender has a built-in render engine which can be used for testing CPU and GPU performance.

                    I have a 12700K with 64 GB of RAM. I would have bought AMD, but only Ryzen/Zen 3 was available and the 59xx series was EOL - Zen 4 wasn't out yet (I wanted to wait and get that, but oh well). I went with Intel, since there was still a bit of an upgrade path and I'd get Quick Sync as an added benefit for video editing (possibly).
                    Anyway, that's all redundant - I was looking at GPUs for Blender, video editing and some gaming, and it was a comparison between similarly priced GPUs in my country - used 3090, 4070 Ti, 7900 XT - trying to decide what to pick, or wait, or whatever. A Threadripper isn't an option for me, so I wasn't including it as any kind of choice - even though it is faster as just a CPU compared to the rest. It has nothing to do with promoting or rejecting CUDA.
                    Yes, I get it. Man, your link looks really good for the Threadripper. They rock with Blender.

                    Right now I cannot give you an educated answer to your "used 3090, 4070 Ti, 7900 XT" question.

                    I think it would be best to make sure we get new HIP-RT benchmarks from phoronix.com against OptiX.

                    Then you can make an educated purchase.



                    • #90
                      Originally posted by qarium View Post

                      Yes, I get it. Man, your link looks really good for the Threadripper. They rock with Blender.

                      Right now I cannot give you an educated answer to your "used 3090, 4070 Ti, 7900 XT" question.

                      I think it would be best to make sure we get new HIP-RT benchmarks from phoronix.com against OptiX.

                      Then you can make an educated purchase.
                      Yep. But NO ONE HAS DONE IT YET! *Hint, hint, hint* (for Michael...)...

