
NVIDIA Makes The PhysX 5.1 SDK Open-Source


  • #31
    Originally posted by WannaBeOCer View Post
    Nvidia GPUs are a 3.5 product while providing much more functionality than AMD's RDNA? Last I checked, when a product gives more functionality it costs more. Another main reason I went with Nvidia is the extra functionality, while with the release of RDNA AMD dropped ROCm support until recently. Even now, deep learning performance on an RX 6800 XT/6900 XT is slightly slower than on a Radeon VII.
    What functionality is that? About the only thing I know of is CUDA, which is stronger on Nvidia right now. DLSS is pretty much playing a game of leapfrog with FSR, and ray tracing is still a gimmick; it will be a feature I care about around 2026 or so, when most games require it. As for AI, it is pretty overrated, since most neural networks only need to be trained once and you can usually buy that capacity from Amazon when you need it. Once trained, a 100 kB weights file runs smoothly even on a freakin' AMD APU, serving hundreds of responses a second. But sure, if you actually need to train AI 24/7, then a 4090 is the thing to reach for. How many need that, realistically speaking? And how many gamers have the screen(s) to take advantage of that 4090?

    This discussion feels like an Apple iSheep trying to explain to me why the Apple camera is SO MUCH BETTER and I simply must plop down $1500 for an iPhone 13 Pro, when my current three-year-old $500 Android smartphone already does everything I need it to, and takes better pictures than my 2004 system camera. Sure, features. Mmmhmmm. Meanwhile, I'd rather buy an electric scooter with that $1000. But hey, whatever you do with your money is up to you! Personally, I think you have a lot to learn about value and how it changes depending on the person who uses it.

    Originally posted by WannaBeOCer View Post

    I didn’t follow the RX 5000 series Linux drivers, but on Windows AMD’s drivers were a joke for six months. I swear AMD and “5000” are cursed, because their HD 5000 series was also plagued with driver issues.

    After reading this article and the comments, I don’t think I’d consider an AMD GPU: https://www.phoronix.com/news/AMDGPU-Fix-For-5.19-Bug
    How HORRIBLE, BUGS EXIST on AMD, this must make AMD the shittiest GPU vendor EVER! That would NEVER happen on Nvidia... Oh wait...

    https://download.nvidia.com/XFree86/...ownissues.html

    And those are just the ones Nvidia tells us about; there seem to be more if you dig deep enough. Nvidia doesn't have much of an advantage over AMD if you run Linux, use Wayland, and have no real use for CUDA.
    Last edited by wertigon; 09 November 2022, 04:05 PM.

    Comment


    • #32
      Originally posted by bob l'eponge View Post
      The GPU part is closed source, in the (not provided) libcuda.so.
      Yeah, and the Windows drivers aren't open either. Nor the Windows kernel, for that matter. Yet I don't see how that changes the licensing terms of this code drop.

      But maybe one could think about this the other way: since PhysX is open source, it should be possible to break the dependency on libcuda, don't you think?
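      For what it's worth, the SDK already treats CUDA as optional: GPU rigid-body simulation is an opt-in path, and a scene that is never given a CUDA context manager runs entirely on the CPU, so libcuda.so never gets loaded. A rough sketch along the lines of the SDK's hello-world snippet (error handling omitted; the worker-thread count and step count are just example values):

      ```cpp
      #include <PxPhysicsAPI.h>
      using namespace physx;

      static PxDefaultAllocator     gAllocator;
      static PxDefaultErrorCallback gErrorCallback;

      int main()
      {
          PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
          PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

          PxSceneDesc sceneDesc(physics->getTolerancesScale());
          sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
          sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);   // CPU worker threads only
          sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
          // No sceneDesc.cudaContextManager and no PxSceneFlag::eENABLE_GPU_DYNAMICS:
          // the whole simulation stays on the CPU and CUDA is never touched.
          PxScene* scene = physics->createScene(sceneDesc);

          for (int i = 0; i < 60; ++i) {            // one second of simulation at 60 Hz
              scene->simulate(1.0f / 60.0f);
              scene->fetchResults(true);            // block until the step completes
          }

          scene->release();
          physics->release();
          foundation->release();
          return 0;
      }
      ```

      As far as I can tell, the GPU path only kicks in if you hand the scene a PxCudaContextManager and set PxSceneFlag::eENABLE_GPU_DYNAMICS, which is exactly the part that still pulls in the closed bits.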

      Comment


      • #33
        Originally posted by wertigon View Post

        What functionality is that? About the only thing I know of is CUDA, which is stronger on Nvidia right now. DLSS is pretty much playing a game of leapfrog with FSR, and ray tracing is still a gimmick; it will be a feature I care about around 2026 or so, when most games require it. As for AI, it is pretty overrated, since most neural networks only need to be trained once and you can usually buy that capacity from Amazon when you need it. Once trained, a 100 kB weights file runs smoothly even on a freakin' AMD APU, serving hundreds of responses a second. But sure, if you actually need to train AI 24/7, then a 4090 is the thing to reach for. How many need that, realistically speaking? And how many gamers have the screen(s) to take advantage of that 4090?

        This discussion feels like an Apple iSheep trying to explain to me why the Apple camera is SO MUCH BETTER and I simply must plop down $1500 for an iPhone 13 Pro, when my current three-year-old $500 Android smartphone already does everything I need it to, and takes better pictures than my 2004 system camera. Sure, features. Mmmhmmm. Meanwhile, I'd rather buy an electric scooter with that $1000. But hey, whatever you do with your money is up to you! Personally, I think you have a lot to learn about value and how it changes depending on the person who uses it.



        How HORRIBLE, BUGS EXIST on AMD, this must make AMD the shittiest GPU vendor EVER! That would NEVER happen on Nvidia... Oh wait...

        https://download.nvidia.com/XFree86/...ownissues.html

        And those are just the ones Nvidia tells us about; there seem to be more if you dig deep enough. Nvidia doesn't have much of an advantage over AMD if you run Linux, use Wayland, and have no real use for CUDA.
        This post is hilarious; pretty much every university uses Nvidia GPUs for scientific computing, including consumer GPUs, and that includes deep learning. Ray tracing isn't a gimmick; it's only a gimmick to you because you're using an AMD GPU. Especially when we look at content creation, ray tracing excels and crushes the competition: https://www.phoronix.com/review/blender-33-nvidia-amd

        When you have drivers that make GPUs unusable, yes, they are horrible. When every reviewer is asking a company when it's going to fix its drivers and never receives a response, I wouldn't want to touch any of its products.

        We asked AMD about when it's going to get its drivers working better to address common, critical issues like crashing with RX 5700 XT, 5600 XT, and other GPU...


        https://wccftech.com/amd-radeon-rx-g...kering-issues/

        Where am I trying to convince you to switch to the latest and greatest? If whatever GPU you have works for you, great, but seeing AMD time and time again fail with software doesn't instill confidence. The last time Nvidia had a devastating software issue like this was the RTX 30 series release, and they fixed that issue within a week. Intel, for example, being new to discrete GPUs, was slammed by reviewers over its Arc drivers, and instead of ignoring its customers it acknowledged the issues and its drivers keep improving. I'd rather pay extra for the superior performance of their fixed-function hardware and quality proprietary software than for the unpredictability of open source.

        Comment


        • #34
          Originally posted by WannaBeOCer View Post
          When you have drivers that make GPUs unusable, yes, they are horrible.

          https://www.reddit.com/r/ManjaroLinu...driver_update/
          https://unix.stackexchange.com/quest...ideo-and-audio

          Look ma! I can also cherry-pick posts and responses to prove my ePenis is the biggest of them all! There are plenty of examples out there in both camps. Linux is better supported by AMD than by Nvidia; that does not make said support flawless.

          Originally posted by WannaBeOCer View Post
          This post is hilarious; pretty much every university uses Nvidia GPUs for scientific computing, including consumer GPUs, and that includes deep learning.
          So? I do not do this. The feature is useless for my use case. So why should I pay a premium for it? I just want to play games, not train a neural network 24/7.

          Originally posted by WannaBeOCer View Post
          Ray tracing isn't a gimmick; it's only a gimmick to you because you're using an AMD GPU. Especially when we look at content creation, ray tracing excels and crushes the competition: https://www.phoronix.com/review/blender-33-nvidia-amd
          According to that link the 3070 Ti is only ~25-60% faster than the 6700 XT. That is not nothing, but it's not 110% faster, which is what would justify the cost difference. Judging between those two cards, the 6700 XT gives better price/performance for 3D content creation, and it does so while consuming ~60% of the power. Is it the best cost/perf card for content creation specifically? Probably not.

          Ray tracing in games, however, is nothing more than a gimmick. I am a gamer, not a content creator. Ray tracing is not very useful for my use case; it is certainly not worth a 110% increase in cost.

          Here is a dirty little secret: most people with a PC *don't* run Photoshop. Or Blender. Or OBS Studio. For them, AMD is clearly the superior choice to run Linux on.

          Originally posted by WannaBeOCer View Post
          Where am I trying to convince you to switch to the latest and greatest? If whatever GPU you have works for you, great, but seeing AMD time and time again fail with software doesn't instill confidence. The last time Nvidia had a devastating software issue like this was the RTX 30 series release, and they fixed that issue within a week. Intel, for example, being new to discrete GPUs, was slammed by reviewers over its Arc drivers, and instead of ignoring its customers it acknowledged the issues and its drivers keep improving. I'd rather pay extra for the superior performance of their fixed-function hardware and quality proprietary software than for the unpredictability of open source.

          There are examples of long-running bugs on the Nvidia side too. How about Wayland being outright broken on Nvidia for several years, for instance? Or Nvidia still not being GL 4.6 conformant?

          Nvidia got lucky that the problem was easily identifiable and easily fixable last time. The current hardware issues with the 4090's power delivery do not exactly scream quality to me either...

          Comment


          • #35
            Originally posted by wertigon View Post

            https://www.reddit.com/r/ManjaroLinu...driver_update/
            https://unix.stackexchange.com/quest...ideo-and-audio

            Look ma! I can also cherry-pick posts and responses to prove my ePenis is the biggest of them all! There are plenty of examples out there in both camps. Linux is better supported by AMD than by Nvidia; that does not make said support flawless.



            So? I do not do this. The feature is useless for my use case. So why should I pay a premium for it? I just want to play games, not train a neural network 24/7.



            According to that link the 3070 Ti is only ~25-60% faster than the 6700 XT. That is not nothing, but it's not 110% faster, which is what would justify the cost difference. Judging between those two cards, the 6700 XT gives better price/performance for 3D content creation, and it does so while consuming ~60% of the power. Is it the best cost/perf card for content creation specifically? Probably not.

            Ray tracing in games, however, is nothing more than a gimmick. I am a gamer, not a content creator. Ray tracing is not very useful for my use case; it is certainly not worth a 110% increase in cost.

            Here is a dirty little secret: most people with a PC *don't* run Photoshop. Or Blender. Or OBS Studio. For them, AMD is clearly the superior choice to run Linux on.



            There are examples of long-running bugs on the Nvidia side too. How about Wayland being outright broken on Nvidia for several years, for instance? Or Nvidia still not being GL 4.6 conformant?

            Nvidia got lucky that the problem was easily identifiable and easily fixable last time. The current hardware issues with the 4090's power delivery do not exactly scream quality to me either...
            I disagree; Nvidia itself provides overall better Linux support than AMD. If that weren’t the case, we would see more AMD cards used in every single data center. You cherry-picked distribution issues, not Nvidia driver issues.

            Being ~25-60% faster at rendering could mean hours or days less to render a scene. You need to step outside of your gaming bubble; PC workstations are used for more than just gaming. Why should Nvidia start working on Wayland when Wayland doesn’t provide feature parity with Xorg?

            https://forums.developer.nvidia.com/...release/214275

            There’s a thing called a console that would be more suitable and cheaper for your use case.
            Last edited by WannaBeOCer; 09 November 2022, 06:33 PM.

            Comment


            • #36
              Originally posted by WannaBeOCer View Post
              I disagree; Nvidia itself provides overall better Linux support than AMD. If that weren’t the case, we would see more AMD cards used in every single data center. You cherry-picked distribution issues, not Nvidia driver issues.
              Same reason there are still more Intel servers out there despite Epyc being the superior CPU.

              Originally posted by WannaBeOCer View Post
              Being ~25-60% faster at rendering could mean hours or days less to render a scene.
              If that is a concern, then getting the fastest render card money can buy makes much more sense than buying a 3070 Ti. But then we're going way beyond consumer-grade hardware. You would want something like the RTX A4000, because the cost of productivity is whatever your hourly salary is: if you can double rendering speed, then for every hour you render, that is another $50 saved. Render for 40 hours a week and you save ~$2000. Payback time is less than 12 months.
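              Spelling that arithmetic out with the same numbers, a throwaway sketch; the ~$1000 card-price premium is my own placeholder, not a figure from this thread:

              ```cpp
              #include <iostream>

              int main()
              {
                  // Numbers from the paragraph above, plus an assumed card-price premium.
                  const double hourly_rate  = 50.0;    // what an hour of your time is worth
                  const double render_hours = 40.0;    // hours spent rendering per week
                  const double card_premium = 1000.0;  // assumed extra cost of the faster card

                  const double weekly_saving = hourly_rate * render_hours;    // ~$2000/week, as above
                  const double payback_weeks = card_premium / weekly_saving;  // ~0.5 weeks

                  std::cout << "Weekly saving: $" << weekly_saving
                            << ", payback in about " << payback_weeks << " weeks\n";
                  return 0;
              }
              ```

              Even with much more conservative assumptions, the payback lands comfortably inside the 12 months mentioned above.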

              If this is *not* a concern, the previous point still stands: you do not build a serious workstation with a 3070 Ti, nor with a 6700 XT.

              Originally posted by WannaBeOCer View Post
              You need to step outside of your gaming bubble; PC workstations are used for more than just gaming.
              What percentage of the entire consumer-grade PC market is workstations? 1%, ish?

              Originally posted by WannaBeOCer View Post
              Why should Nvidia start working on Wayland when Wayland doesn’t provide feature parity with Xorg?

              https://forums.developer.nvidia.com/...release/214275
              Because not doing it will cost them even more sales in the Linux space?

              Wayland will never have feature parity because a lot of XOrg features can be better supported by other protocols (like, for instance, pipewire for networking and screen sharing).

              Originally posted by WannaBeOCer View Post
              There’s a thing called a console that would be more suitable and cheaper for your use case.
              Sorry, I need access to developer tools like GCC, GDB and Wireshark to do my work.

              Comment


              • #37
                Originally posted by wertigon View Post

                Same reason there are still more Intel servers out there despite Epyc being the superior CPU.



                If that is a concern, then getting the fastest render card money can buy makes much more sense than buying a 3070 Ti. But then we're going way beyond consumer-grade hardware. You would want something like the RTX A4000, because the cost of productivity is whatever your hourly salary is: if you can double rendering speed, then for every hour you render, that is another $50 saved. Render for 40 hours a week and you save ~$2000. Payback time is less than 12 months.

                If this is *not* a concern, the previous point still stands: you do not build a serious workstation with a 3070 Ti, nor with a 6700 XT.



                What percentage of the entire consumer-grade PC market is workstations? 1%, ish?



                Because not doing it will cost them even more sales in the Linux space?

                Wayland will never have feature parity because a lot of XOrg features can be better supported by other protocols (like, for instance, pipewire for networking and screen sharing).



                Sorry, I need access to developer tools like GCC, GDB and Wireshark to do my work.
                You do realize the RTX 3070 Ti is actually faster than the A4000 in certain programs like Blender? Not everyone needs ECC memory. Unlike Intel, Nvidia doesn't really have competition in workstation GPUs, as OptiX and its Tensor cores already show. The only competition Nvidia gets from AMD is in HPC, against the MI200, which is already outperformed by the H100. At the end of the day, Nvidia provides more functionality, better performance, and better support. It costs a bit extra, but from my own experience it has been worth it.

                Even Intel's Arc A750 and A770 outperform the W6800 in Blender.

                Comment


                • #38
                  WannaBeOCer: use-case A is better on Nvidia, so it's the superior product

                  wertigon: yes, but use-case B is better on AMD

                  WannaBeOCer: yes, but use-case A ...

                  wertigon: yes, but use-case B ...

                  the info on each use-case and how each GPU adapts to it is interesting though, so please carry on

                  Comment


                  • #39
                    Originally posted by marlock View Post
                    WannaBeOCer: use-case A is better on Nvidia, so it's the superior product

                    wertigon: yes, but use-case B is better on AMD

                    WannaBeOCer: yes, but use-case A ...

                    wertigon: yes, but use-case B ...

                    the info on each use-case and how each GPU adapts to it is interesting though, so please carry on
                    I missed the use case where AMD is better; can you please point it out? All I see is AMD remaining the budget gaming brand.

                    Comment


                    • #40
                      All AMD has:
                      1. An open-source driver that's good only for gaming, with slightly better compatibility than Nvidia.
                      2. Better Wayland support and better integration with the Linux desktop in general.

                      AMD is shit in almost all other aspects compared to Nvidia on Linux. Heck, even Intel is beating the equivalent AMD GPU in Blender.

                      Comment
