NVIDIA Adds PhysX GPU Acceleration Support Under Linux

  • #31
    Originally posted by Kivada View Post
    G-Sync requires specialized hardware; AdaptiveSync/FreeSync are the actual open VESA spec implementations. CUDA only runs on Nvidia hardware; OpenCL runs on any processor type you have, no matter the architecture, including the DSP/codec chip in your cell phone. No, Nvidia leads the way with vendor-specific extensions that never get added to the spec.
    Like many Nvidia bashers, you are clueless about the technologies you are basing your arguments on.

    FreeSync is AMD's implementation of AdaptiveSync, just as G-Sync is Nvidia's implementation of the same standard. Both will require exactly the same hardware in the monitors; only fools would claim that G-Sync needs special hardware support that FreeSync can do without. CUDA is not a proprietary equivalent of OpenCL. CUDA is a parallel computing architecture, implementing a range of APIs and languages, including OpenCL. It has been years since Nvidia opened up their compiler.

    You are totally clueless about the OpenGL specification. Most major and minor contributions come from Nvidia; without them we would never have an API that's able to compete with Direct3D.

    • #32
      Originally posted by efikkan View Post
      Like many Nvidia bashers, you are clueless about the technologies you are basing your arguments on.

      FreeSync is AMD's implementation of AdaptiveSync, just as G-Sync is Nvidia's implementation of the same standard. Both will require exactly the same hardware in the monitors; only fools would claim that G-Sync needs special hardware support that FreeSync can do without. CUDA is not a proprietary equivalent of OpenCL. CUDA is a parallel computing architecture, implementing a range of APIs and languages, including OpenCL. It has been years since Nvidia opened up their compiler.

      You are totally clueless about the OpenGL specification. Most major and minor contributions come from Nvidia; without them we would never have an API that's able to compete with Direct3D.
      From my "limited" understanding, FreeSync is possible on certain displays with a firmware update; but is otherwise open for any vendor to use (need clarification on this). G-Sync on the other hand may rely on the same concept, but is NVIDIA-only (no Intel, no AMD).

      First time I heard of CUDA implementing OpenCL. Is there a source on how this actually works? When I hear CUDA, I think of a GPU-processing language; same with OpenCL (I know it can be run on a CPU as well, though).

      My main gripes with NVIDIA are Tegra and PhysX, and I've been screwed over by both. I had a 2012 Nexus 7 and some "Tegra-optimized" games. Couldn't use those games on any other device as-is. The real kicker is that if you forced some Tegra games to run on superior non-Tegra hardware, they would work fine, and even perform faster. Tegra is nothing but a gimmick, and in my case, I have about $10 of games I can't use.

      Just to be clear, Tegra-optimized games only introduce things like higher texture quality, better lighting, sparks, shadows, and jello liquid (things easily possible for higher-end graphics chips). Nothing innovative, and nothing deserving to be restricted to outdated hardware.

      As for PhysX, the CPU version was horrible. I couldn't play many Unreal Engine 3 titles because NVIDIA didn't take the time to optimize CPU-side PhysX. But hey, GPU PhysX was alright, of course.

      From a business standpoint, NVIDIA did nothing wrong in those scenarios. But I don't play games with companies. If you make my experience garbage on hardware that isn't your own, don't expect me to pay you money to support your behavior.

      AMD has done nothing like the above afaik, and actually seems to care about non-restrictive tech. Remember when TressFX came out? It used DirectCompute. Worked fine on any GPU that could handle that (Microsoft) API. NVIDIA had a similar hair tech demo, but guess what it was restricted to? Luckily I think it only remained a demo. Then you have Mantle, which isn't restricted to AMD-only (though it currently only runs on AMD). Of course other vendors would have to choose to implement it, and iirc, Intel showed some interest. NVIDIA of course didn't. Then there's OpenCL (vendor independent) and CUDA (NVIDIA-only).

      Originally posted by efikkan View Post
      You are wrong. CUDA is open, G-sync is an implementation of an open standard. Nvidia is also the main contributor behind OpenGL.
      Source? And by open, you're saying Intel or AMD can freely implement CUDA at will if they wanted? Because I'm seriously doubting that's the case...

      • #33
        Originally posted by Espionage724 View Post
        From my "limited" understanding, FreeSync is possible on certain displays with a firmware update; but is otherwise open for any vendor to use (need clarification on this). G-Sync on the other hand may rely on the same concept, but is NVIDIA-only (no Intel, no AMD).
        And you are basing that on AMD using the term "free" in their branding? FreeSync and G-Sync are implementations of the same technologies. Current PC monitors lack this capability in their driver circuits, so Nvidia funded the development of such a chip. Obviously the first products cost a premium, as with all new technology, but the cost will be baked into the monitor price as this becomes more widespread.

        Originally posted by Espionage724 View Post
        First time I heard of CUDA implementing OpenCL. Is there a source on how this actually works? When I hear CUDA, I think of a GPU-processing language; same with OpenCL (I know it can be run on a CPU as well, though).
        Then let me educate you:

        Like most people, you are thinking of CUDA C/C++, which is the most commonly used. The open source compiler uses LLVM and can in theory be extended to compile almost anything to run on GPUs. In theory there is nothing stopping anyone from compiling CUDA C/C++ for other GPUs, provided you can find capable hardware. AMD actually has C++ AMP to try to compete with this.
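        For anyone unsure what "CUDA C/C++" actually looks like, here is a minimal sketch (a hypothetical SAXPY example, not taken from any real project): ordinary C++ host code and a __global__ device function living in the same .cu source file.

        #include <cstdio>
        #include <cuda_runtime.h>

        // Device code: y = a*x + y, one element per thread.
        __global__ void saxpy(int n, float a, const float *x, float *y)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                y[i] = a * x[i] + y[i];
        }

        int main()
        {
            const int n = 1 << 20;
            float *x, *y;
            // Managed (unified) memory keeps the host-side plumbing short.
            cudaMallocManaged(&x, n * sizeof(float));
            cudaMallocManaged(&y, n * sizeof(float));
            for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

            // Launch enough 256-thread blocks to cover all n elements.
            saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
            cudaDeviceSynchronize();

            printf("y[0] = %f\n", y[0]);  // expect 4.0
            cudaFree(x);
            cudaFree(y);
            return 0;
        }

        Built with something like "nvcc saxpy.cu", this runs on any CUDA-capable Nvidia GPU; the open question in this thread is whether anything other than Nvidia's own back-end can turn it into code for other hardware.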

        Originally posted by Espionage724 View Post
        AMD has done nothing like the above afaik, and actually seems to care about non-restrictive tech. Remember when TressFX came out? It used DirectCompute. Worked fine on any GPU that could handle that (Microsoft) API. NVIDIA had a similar hair tech demo, but guess what it was restricted to? Luckily I think it only remained a demo. Then you have Mantle, which isn't restricted to AMD-only (though it currently only runs on AMD). Of course other vendors would have to choose to implement it, and iirc, Intel showed some interest. NVIDIA of course didn't. Then there's OpenCL (vendor independent) and CUDA (NVIDIA-only).
        Mantle, TrueAudio, etc. are all brilliant examples of AMD's openness, oh wait a minute, they're not. Mantle is supposed to be open, but it's not, yet AMD claims it is, and we are already a year in.
        Last edited by efikkan; 15 October 2014, 02:30 PM.

        • #34
          Originally posted by efikkan View Post
          And you are basing that on AMD using the term "free" in their branding? FreeSync and G-Sync are implementations of the same technologies. Current PC monitors lack this capability in their driver circuits, so Nvidia funded the development of such a chip. Obviously the first products cost a premium, as with all new technology, but the cost will be baked into the monitor price as this becomes more widespread.
          AMD apparently doesn't charge royalties for the use of FreeSync: http://support.amd.com/en-us/search/faq/220

          And also, FreeSync relies on a public standard (DisplayPort Adaptive-Sync). G-Sync requires proprietary G-Sync hardware built into the monitor (which once again only benefits NVIDIA).

          Originally posted by efikkan View Post
          Then let me educate you:

          Like most people, you are thinking of CUDA C/C++, which is the most commonly used. The open source compiler uses LLVM and can in theory be extended to compile almost anything to run on GPUs. In theory there is nothing stopping anyone from compiling CUDA C/C++ for other GPUs, provided you can find capable hardware. AMD actually has C++ AMP to try to compete with this.
          Hmm, I don't know all that much about programming languages exactly, so I can't really argue that. What constitutes "capable hardware"?

          Originally posted by efikkan View Post
          Mantle, TrueAudio, etc. are all brilliant examples of AMD's openness, oh wait a minute, they're not. Mantle is supposed to be open, but it's not, yet AMD claims it is, and we are already a year in.
          If Mantle is open, it would need other vendors to support it in order for it to work on other hardware. From what I recall, Intel has shown interest, but I'm unsure as to where this went exactly. At no point have I heard AMD claim that Mantle would be forever locked to only their hardware.

          As for TrueAudio, all it is is an audio DSP on the GPU. I'm sure the creative people at NVIDIA could do something similar if they really wanted to, and it's not exactly proprietary either. I guess it could be argued that TrueAudio from AMD isn't "open", but this is nothing compared to NVIDIA's tech.

          • #35
            Originally posted by efikkan View Post
            Mantle, TrueAudio, etc. are all brilliant examples of AMD's openness, oh wait a minute, they're not. Mantle is supposed to be open, but it's not, yet AMD claims it is, and we are already a year in.
            When Mantle is done (1.0) it will be released and open. I don't see a good reason why AMD would want to keep Nvidia and Intel out. With CUDA, PhysX and G-Sync, Nvidia gives the consumer yet another reason to go Nvidia.
            The chip for TrueAudio is licensed from a third party, so Nvidia has the choice of going the same route, using another third party, or figuring out how to implement it in CUDA (most likely).

            • #36
              Originally posted by efikkan View Post
              Then let me educate you:

              Like most people, you are thinking of CUDA C/C++, which is the most commonly used. The open source compiler uses LLVM and can in theory be extended to compile almost anything to run on GPUs. In theory there is nothing stopping anyone from compiling CUDA C/C++ for other GPUs, provided you can find capable hardware. AMD actually has C++ AMP to try to compete with this.
              The bindings and the definition of the language are irrelevant. The CUDA toolkit only compiles to a proprietary binary format, supported only by NVidia hardware. Let's quote NVidia for good measure:
              Hence, source files for CUDA applications consist of a mixture of conventional C++ host code, plus GPU device (i.e., GPU-) functions. The CUDA compilation trajectory separates the device functions from the host code, compiles the device functions using proprietary NVIDIA compilers/assemblers, compiles the host code using a general purpose C/C++ compiler that is available on the host platform, and afterwards embeds the compiled GPU functions as load images in the host object file. In the linking stage, specific CUDA runtime libraries are added for supporting remote SIMD procedure calling and for providing explicit GPU manipulation such as allocation of GPU memory buffers and host-GPU data transfer.
              The CUDA compiler is not open (LLVM is permissively licensed, which means you can add closed modules), the CUDA runtime libraries are not open, and the binary representation is not open.

              How many projects do you know that embed CUDA code and make it run on non-NVidia hardware?
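              To make that "compilation trajectory" concrete, here is a hypothetical annotated .cu file (names invented for illustration) showing which parts go to NVIDIA's closed device tool chain and which to the ordinary host compiler:

              // sketch.cu -- hypothetical file illustrating the split nvcc performs.
              // Everything marked __global__ (or __device__) is device code: nvcc hands
              // it to NVIDIA's device back-end, whose PTX-to-machine-code assembler is
              // closed. The rest is plain C++ host code, passed to gcc/clang/MSVC, with
              // the compiled GPU image embedded into the resulting object file.
              #include <cuda_runtime.h>

              __global__ void scale(float *data, float s)   // device function
              {
                  int i = blockIdx.x * blockDim.x + threadIdx.x;
                  data[i] *= s;
              }

              int main()                                    // host code
              {
                  float *d;
                  cudaMalloc(&d, 256 * sizeof(float));      // CUDA runtime call (closed library)
                  scale<<<1, 256>>>(d, 2.0f);               // rewritten by nvcc into a runtime
                                                            // launch of the embedded GPU image
                  cudaDeviceSynchronize();
                  cudaFree(d);
                  return 0;
              }

              "nvcc -ptx sketch.cu" stops at the documented PTX intermediate form; the final machine code for a given GPU is produced by NVIDIA's closed assembler, which is exactly the point being made here.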

              • #37
                Originally posted by Espionage724 View Post
                AMD apparently doesn't charge royalties for the use of FreeSync: http://support.amd.com/en-us/search/faq/220

                And also, FreeSync relies on a public standard (DisplayPort Adaptive-Sync). G-Sync requires proprietary G-Sync hardware built into the monitor (which once again only benefits NVIDIA).
                You are swallowing AMD's PR bullshit raw. FreeSync and G-Sync are both implementations of VESA AdaptiveSync; the only major difference is that Nvidia incorporated a DP 1.3 feature without waiting for the standard.
                For older monitors to support adaptive sync like G-Sync, the driver circuits need to be replaced, which carries a premium price, as with all early adopters of new technology. New high-end display circuits will have this support built in without a steep premium, regardless of whether you are running Nvidia G-Sync, AMD FreeSync or an upcoming Intel "WhateverSync".

                Originally posted by Espionage724 View Post
                Hmm, I don't know all that much about programming languages exactly, so I can't really argue that. What constitutes "capable hardware"?
                I'm talking about the programming capabilities of the hardware. It's well known to GPU programmers that Nvidia's GPUs offer far more flexibility than AMD's; currently no games utilize this, but professional compute customers leverage it a lot. It would be hard to convey the precise differences without intimate knowledge of programming languages, so I'll try an analogy instead: programming a GPU with flexible C/C++ rather than restrictive OpenCL is almost like programming a computer in a flexible language like C rather than with punched cards. For a simple calculation OpenCL will be just as fast, but once you want to do more complex calculations, logic and data allocation, you need something more flexible.
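                As a rough, hypothetical illustration of the kind of flexibility meant here, the following CUDA kernel uses recursion and device-side new/delete, neither of which a plain OpenCL 1.x kernel can express (whether doing this on a GPU is wise is a separate question):

                #include <cstdio>
                #include <cuda_runtime.h>

                // Recursive device function -- legal in CUDA on compute capability 2.0+.
                __device__ int fib(int n)
                {
                    return (n < 2) ? n : fib(n - 1) + fib(n - 2);
                }

                __global__ void flexible(int *out, int n)
                {
                    int i = blockIdx.x * blockDim.x + threadIdx.x;
                    if (i >= n) return;

                    // Device-side heap allocation -- also unavailable in OpenCL 1.x kernels.
                    int *scratch = new int[1];
                    scratch[0] = fib(i % 12);
                    out[i] = scratch[0];
                    delete[] scratch;
                }

                int main()
                {
                    int *out;
                    cudaMallocManaged(&out, 64 * sizeof(int));
                    // Compile with e.g. "nvcc -arch=sm_30": device new/recursion need
                    // compute capability 2.0+, managed memory needs 3.0+.
                    flexible<<<1, 64>>>(out, 64);
                    cudaDeviceSynchronize();
                    printf("out[10] = %d\n", out[10]);  // fib(10) = 55
                    cudaFree(out);
                    return 0;
                }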

                You can see the CUDA architecture even supports Fortran, and you might wonder why anyone would care about that. Well, it's still heavily used in research institutions and even some companies. Having a GPU that's capable of parallelizing this on a massive scale is a tremendous achievement.

                Back to your question regarding capable GPUs. AMD's GCN architecture is unfortunately not advanced enough to compete with Nvidia's support, and could only implement a subset of such languages. I hope their replacement architecture (which should be out next year?) gets most of these features so both Nvidia and AMD can evolve it into a common open standard. Provided AMD creates such GPUs, they could customize this CUDA compiler to fit their own GPU architecture. I think an evolved "CUDA architecture" would be the best basis for a next-generation "OpenCL" and even "OpenGL", doing all the computation on the GPUs with minimal instructions from the CPU and driver. Unlike Mantle, this would be a step forward.

                Originally posted by Espionage724 View Post
                If Mantle is open, it would need other vendors to support it in order for it to work on other hardware. From what I recall, Intel has shown interest, but I'm unsure as to where this went exactly. At no point have I heard AMD claim that Mantle would be forever locked to only their hardware.
                Currently it's not open, and that's the situation we have to deal with today.

                Originally posted by Ferdinand View Post
                When Mantle is done (1.0) it will be released and open. I don't see a good reason why AMD would want to keep Nvidia and Intel out. With CUDA, PhysX and G-Sync, Nvidia gives the consumer yet another reason to go Nvidia.
                Until then, it remains closed. Lots of software eventually gets opened up; heck, even versions of MS-DOS did.

                Besides the open/closed debate, there is the question of what relevance Mantle will have. With every day that passes, Mantle becomes more irrelevant; OpenGL already surpasses it with its low-latency features, and Direct3D 12 will as well. Mantle only has a clear advantage for low-performance systems, and might have some use in the current generation of gaming consoles, where the CPU is very underpowered. As already stated, OpenGL already has efficient functions for rendering meshes, and even bindless graphics, which moves more of the complexity into GPU shaders and removes the need for some API calls; Mantle currently has nothing comparable. Mantle in its current form is simply not the answer. What we need is a low-level GPU programming language (a C for the GPU), so programmers can do compute, graphics, audio, whatever.

                • #38
                  Originally posted by efikkan View Post
                  You are swallowing AMD's PR bullshit raw. FreeSync and G-Sync are both implementations of VESA AdaptiveSync; the only major difference is that Nvidia incorporated a DP 1.3 feature without waiting for the standard.
                  For older monitors to support adaptive sync like G-Sync, the driver circuits need to be replaced, which carries a premium price, as with all early adopters of new technology. New high-end display circuits will have this support built in without a steep premium, regardless of whether you are running Nvidia G-Sync, AMD FreeSync or an upcoming Intel "WhateverSync".
                  That would be nice. It would mean that AMD could use G-Sync monitors with a little driver change, and that it is safe to go with a G-Sync monitor. Being forced to stay with Nvidia just because you bought a G-Sync monitor is not a nice prospect. I won't believe you until I see it.
                  Originally posted by efikkan View Post
                  I'm talking about the programming capabilities of the hardware. It's well known to GPU programmers that Nvidia's GPUs offer far more flexibility than AMD's
                  I find that hard to believe.
                  Originally posted by efikkan View Post
                  You can see the CUDA architecture even supports Fortran, and you might wonder why anyone would care about that. Well, it's still heavily used in research institutions and even some companies. Having a GPU that's capable of parallelizing this on a massive scale is a tremendous achievement.
                  If you mean that CUDA is far ahead I would believe you.
                  Originally posted by efikkan View Post
                  AMD's GCN architecture is unfortunately not advanced enough to compete with Nvidia's support, and could only implement a subset of such languages. I hope their replacement architecture (which should be out next year?) gets most of these features so both Nvidia and AMD can evolve it into a common open standard.
                  Do you have any sources for what AMD's GCN architecture lacks? I think GCN 1.2 will be AMD's new architecture for their next high-end cards. Why would you think Nvidia would suddenly work together with someone?
                  Originally posted by efikkan View Post
                  Provided AMD creates such GPUs, they could customize this CUDA compiler to fit their own GPU architecture. I think an evolved "CUDA architecture" would be the best basis for a next-generation "OpenCL" and even "OpenGL", doing all the computation on the GPUs with minimal instructions from the CPU and driver. Unlike Mantle, this would be a step forward.
                  AMD using CUDA would mean that they would be at the mercy of Nvidia. How is Mantle a step backwards?

                  You are extremely biased towards Nvidia. Nothing AMD does is good in your eyes and everything Nvidia does is the right thing to do. A lot of FUD and no evidence for your claims.

                  • #39
                    Originally posted by efikkan View Post
                    I'm talking about the programming capabilities of the hardware. It's well known to GPU programmers that Nvidia's GPUs offer far more flexibility than AMD's; currently no games utilize this, but professional compute customers leverage it a lot. It would be hard to convey the precise differences without intimate knowledge of programming languages, so I'll try an analogy instead: programming a GPU with flexible C/C++ rather than restrictive OpenCL is almost like programming a computer in a flexible language like C rather than with punched cards. For a simple calculation OpenCL will be just as fast, but once you want to do more complex calculations, logic and data allocation, you need something more flexible.

                    You can see the CUDA architecture even supports Fortran, and you might wonder why anyone would care about that. Well, it's still heavily used in research institutions and even some companies. Having a GPU that's capable of parallelizing this on a massive scale is a tremendous achievement.
                    Are you even a GPU programmer? Architecture-wise, GPUs (both AMD and NVidia) are not meant for that kind of flexibility (dynamic memory allocation and dynamic instruction fetching simply suck on GPGPU). While CUDA compute capability >= 2.0 supports some of it, you should simply refrain from using it. As such, any application that is well suited to GPGPU will perform just the same with optimized CUDA on NVidia or optimized OpenCL on both AMD and NVidia (well, if you can put up with the 1.1 support on NVidia).
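                    To illustrate how closely straightforward data-parallel code maps between the two, here is a trivial vector-add sketch (hypothetical, not from any real codebase) as a CUDA kernel, with the equivalent OpenCL C kernel shown in the comment:

                    #include <cstdio>
                    #include <cuda_runtime.h>

                    // CUDA version: one element per thread, no dynamic allocation, no recursion.
                    __global__ void vadd(const float *a, const float *b, float *c, int n)
                    {
                        int i = blockIdx.x * blockDim.x + threadIdx.x;
                        if (i < n)
                            c[i] = a[i] + b[i];
                    }

                    /* The same kernel in OpenCL C is essentially a renaming exercise:

                       __kernel void vadd(__global const float *a, __global const float *b,
                                          __global float *c, int n)
                       {
                           int i = get_global_id(0);
                           if (i < n)
                               c[i] = a[i] + b[i];
                       }

                       For workloads like this the limiting factor is memory bandwidth, not
                       which of the two languages the kernel happens to be written in. */

                    int main()
                    {
                        const int n = 1024;
                        float *a, *b, *c;
                        cudaMallocManaged(&a, n * sizeof(float));
                        cudaMallocManaged(&b, n * sizeof(float));
                        cudaMallocManaged(&c, n * sizeof(float));
                        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

                        vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);
                        cudaDeviceSynchronize();
                        printf("c[0] = %f\n", c[0]);  // expect 3.0
                        cudaFree(a); cudaFree(b); cudaFree(c);
                        return 0;
                    }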

                    You can of course use OpenCL from Fortran, as there are Fortran bindings. The kernels, though, are still written in OpenCL C, unless you have a SPIR compiler, except the "more flexible NVidia GPUs" don't implement SPIR :/

                    • #40
                      Originally posted by efikkan View Post
                      You are swallowing AMD's PR bullshit raw. FreeSync and G-Sync are both implementations of VESA AdaptiveSync; the only major difference is that Nvidia incorporated a DP 1.3 feature without waiting for the standard.
                      For older monitors to support adaptive sync like G-Sync, the driver circuits need to be replaced, which carries a premium price, as with all early adopters of new technology. New high-end display circuits will have this support built in without a steep premium, regardless of whether you are running Nvidia G-Sync, AMD FreeSync or an upcoming Intel "WhateverSync".
                      You are so wrong it's funny. How much does Nvidia pay you?

                      G-Sync is in no way "an implementation of VESA AdaptiveSync". It cannot be, because VESA AdaptiveSync is AMD's FreeSync; AMD pushed it into the standard. Further, if it were just an implementation of it, it would be compatible with it, which it isn't.

                      Sigh. The fanboys here.
