In-Fighting Continues Over OpenACC In GCC


  • #21
    Originally posted by Daktyl198 View Post
    I don't get the issue...?
    Because you haven't attempted to do so.

    NVidia are doing exactly what is ENCOURAGED in the Open Source community: doing what benefits you, that might benefit others.
    Rather, it vendor-locks people to Nvidia, and they want GCC to be their tool for enforcing that lock. Needless to say, such an approach is rarely welcome in the open-source world.

    They even said that the PTX thing is just a back-end, meaning that with the framework laid out for the OpenACC front-end, Intel and AMD could come along and write their own back-end to it and help with the front-end.
    However, as it stands their code only benefits Nvidia itself. Furthermore, they managed to do it in the most open-source-hostile way you can imagine.

    Let's just compare the two approaches:
    * AMD released "real" specs: the actual instruction set of actual hardware. Anyone can then implement whatever they want or need and use it wherever it fits. All you need is some platform with PCIe and an AMD card - even ARM and MIPS devices can do it, with no assumptions about the host CPU architecture (recall that Nvidia recently failed to build a MIPS version of their "super-duper" driver and lost a contract worth about $500M as a result). The binary-only proprietary driver is not a real requirement for running code on the GPU - there is enough data and code for those who want to implement whatever they like on top of that instruction set (and some basic support is already in the open-source drivers). Sure, writing or seriously rewriting a GPU driver can be a challenge. On the other hand, it allows everyone to shape their future as they see fit. That's where open source shines.

    * Nvidia releases some "virtual" ISA which does not exist in real hardware. Furthermore, that ISA does not exist as an open-source software implementation either. The only thing that can execute the resulting code is ... the proprietary Nvidia driver. There is no other implementation, and the data provided makes it hard to create one. At the end of the day it's just an IR (intermediate representation), useless without the binary-only Nvidia driver to convert it into actual GPU code. There is already a load of VMs with intermediate virtual bytecodes. I really fail to see any sane reason to have yet another one, especially when the implementation originates from a proprietary vendor inclined to vendor-lock everyone to their binary-only drivers.

    So what if PTX isn't open? If you don't like it, write an Intel/AMD back-end for it and use their cards. In the meantime, stop getting in the way of *nix users who want this feature on their Nvidia cards.
    Perfectly fine by me... as long as Nvidia GTFOs and maintains their own fork of GCC. I see no reason why this locked-down sh*t should be mainlined. Open-source devs are not supposed to be Nvidia's personal footpads. And yes, it's really amazing how some morons can come to someone else's playground, try to establish their own rules, and then be surprised when they get slapped in the face (like Torvalds giving them the finger).



    • #22
      Originally posted by 0xBADCODE View Post
      Rather, it vendor-locks people to Nvidia
      What other vendor is selling CUDA-capable hardware?



      • #23
        Originally posted by Marc Driftmeyer View Post
        FWIW: You don't see this crap by Nvidia being pulled with LLVM/Clang community because they don't have jack squat influence over the direction of the project(s).
        The only 'crap' here is what's coming out of you: LLVM has already added an NVidia-supplied PTX backend. Your Apple 'reality distortion field' is indeed strong.



        • #24
          Originally posted by 0xBADCODE View Post
          Rather, it vendor-locks people to Nvidia, and they want GCC to be their tool for enforcing that lock.
          Obviously they want to make their hardware more attractive to buyers by having GCC support their solution, but that is no more 'vendor lock-in' than any hardware manufacturer adding GCC support for their CPU architecture is 'vendor lock-in'.

          Originally posted by 0xBADCODE View Post
          However, as it stands their code only benefits Nvidia itself. Furthermore, they managed to do it in the most open-source-hostile way you can imagine.
          It benefits users of NVidia cards, just like vendor X adding support for their hardware architecture Y benefits users of that hardware and makes it more attractive to said users.

          Originally posted by 0xBADCODE View Post
          Let's just compare the two approaches:
          * AMD released "real" specs: the actual instruction set of actual hardware. Anyone can then implement whatever they want or need and use it wherever it fits.
          I certainly find a well-documented and fully open specification superior, and again I have no interest in Nvidia's proprietary solutions, but then again I can't expect them to pay for code being written to support OpenACC on other vendors' cards. They are paying to have OpenACC implemented in GCC, and they are also paying to have it generate PTX code. Now anyone else, including AMD, can choose either to extend the OpenACC implementation to generate code which works with their cards, or, less likely I suppose, to support PTX in their card drivers.

          Either way, GCC gets an OpenACC implementation which NVidia pays for, which can then be extended to support any card. Of course NVidia will not pay for the latter; they pay for OpenACC and its ability to generate PTX code. Selfish? Not really, in my opinion. However, for me personally to have any interest in PTX it would have to be supported across GPU vendors and not just on NVidia cards, and here NVidia does seem guilty of 'pushing yet another NVidia-only solution with no intention of cooperating with other vendors'.

          But how anyone can try to portray NVidia merely adding support for their PTX solution through OpenACC as them 'buying' GCC is just ridiculous. If other vendors want to support their cards through OpenACC (which NVidia is paying to have implemented) they can add such support, and hopefully they will (and hopefully their solutions will be well documented and thus supportable across vendors).

          Originally posted by 0xBADCODE View Post
          Open-source devs are not supposed to be Nvidia's personal footpads. And yes, it's really amazing how some morons can come to someone else's playground, try to establish their own rules, and then be surprised when they get slapped in the face (like Torvalds giving them the finger).
          Huh? NVidia is paying CodeSourcery (loooooong time contributors and maintainers of GCC code) to implement and maintain OpenACC and PTX in GCC.



          • #25
            You're not grasping my statement. That is not mainline llvm/clang.



            • #26
              Originally posted by XorEaxEax View Post
              The only 'crap' is that which is coming out of you, LLVM have already added a NVidia supplied PTX backend. Your Apple 'reality distortion field' is indeed strong.
              No they have not.

              Libclc is not added, but is an optional project add-on for the PTX interface only that leverages the OpenCL standard support in LLVM. It is not part of Trunk nor will it be added to it.

              There was an attempt, and it was shut down because the goal of LLVM/Clang and its OpenCL support was to directly target the OpenCL specs from Khronos. Vendor-specific work is welcome out of branch.



              • #27
                Originally posted by Marc Driftmeyer View Post
                No they have not.

                Libclc is not added, but is an optional project add-on for the PTX interface only that leverages the OpenCL standard support in LLVM. It is not part of Trunk nor will it be added to it.
                The NVidia supplied PTX backend was introduced as a major new feature of LLVM 3.2, under the name NVPTX, here is the documentation: http://llvm.org/docs/NVPTXUsage.html

                And OpenACC is purely optional in GCC, both in terms of using it when it is enabled, as well as being optionally enabled in the build to begin with. So really, what 'crap' has been pulled with GCC that would 'never be pulled with Clang/LLVM'?



                • #28
                  Originally posted by phoronix View Post
                  Phoronix: In-Fighting Continues Over OpenACC In GCC

                  I will happily benchmark them and I am a happy user of NVIDIA's binary drivers because they are simply the best right now for features, performance, and reliability.

                  http://www.phoronix.com/vr.php?view=MTUyMjA
                  Reliability, that's the very reason I'm using Nouveau instead of unknowable, unauditable, unmodifiable blobs.
                  Did you not learn from this: http://www.h-online.com/open/news/it...x-1658318.html ?



                  • #29
                    Originally posted by johnc View Post
                    What other vendor is selling CUDA-capable hardware?
                    So now we can see: "Open"ACC is all about the closed-source driver and proprietary CUDA, just rebranded, isn't it? It looks like a futile rebadging attempt, declared to be something "open". If I were talking about something really open, I would rephrase it this way: which vendor sells GPGPU-capable hardware? Then it would sound sane (in the sense of someone creating an open standard rather than just padding their own *proprietary* crap). If Nvidia made their CUDA so deeply hardwired to their hardware, that's their fault. Something pretending to be a more or less generic standard has to actually be generic enough that more than one vendor can implement it. A good example is OpenCL: backed by a committee, multiple vendors can implement it, so while the idea originated mostly from AMD, Nvidia was able to implement it as well. There is also an OpenCL implementation for some Intel hardware, too. That's how standards are created properly. Standards and open interfaces are not meant to serve the interests of one particular vendor exclusively, and calling something like Nvidia's implementation "open" is a misnomer. Just some stupid marketing bullshit from Nvidia.

                    Originally posted by XorEaxEax View Post
                    Obviously they want to make their hardware more attractive to buyers by having GCC support their solution,
                    They may want whatever they like. That does not force anyone else to obey or share their views "automatically". Software devs in general (including many GCC devs, I guess) aren't affiliated with Nvidia and have reasons to discourage such approaches in open-source software. And it's not "GCC support for their hardware". It's some weird abstract intermediate representation for their proprietary driver. It has zero value without the blob and benefits nobody but Nvidia. So I think it's "politically incorrect" to try to make non-Nvidia people spend their time dealing with this chunk of PROPRIETARY stuff.

                    but that is no more 'vendor lock-in' than any hardware manufacturer adding GCC support for their CPU architecture is 'vendor lock-in'.
                    I wouldn't mind if GCC could generate native GPU code for Nvidia cards, provided it could be launched by some open-source uploader/driver. However, some intermediate virtual crap which is further processed by a proprietary blob is not a "CPU architecture" at all. It's just an intermediate interface to a binary-only driver.

                    It benefits users of NVidia cards, just like vendor X adding support for their hardware architecture Y benefits users of that hardware and makes it more attractive to said users.
                    Again, stop your lies. NVPTX is not a "hardware architecture" - just a mere interface to a binary-only driver, really useless without that binary-only driver (which is what creates the vendor lock). And by the way, these days there are quite few morons who dare insist their piece of hardware absolutely needs some huge blob-only driver. Nvidia is an unpleasant exception in this regard. For some reason they seem to be retarded enough to miss the point even after facing very direct feedback like the infamous "f...k you, Nvidia".

                    I certainly find a well-documented and fully open specification superior, and again I have no interest in Nvidia's proprietary solutions,
                    The only problem here is that Nvidia ... lacks specifications for their hardware. They recently published some small bits, but it's in no way complete. Just compare it to what AMD or Intel have published and feel the difference. NVPTX is not a hardware interface at all, just another dumb intermediate VM abstraction, and the only implementation of this interface on the whole planet is Nvidia's blob-only driver. If that's what Nvidia calls "open", I'd rather say it's stupid marketing bullshit hoping to use the "open" buzzword to promote NV's proprietary crap.

                    but then again I can't expect them to pay for code being written to support OpenACC for other vendors cards. They are paying to have OpenACC implemented in GCC, and they are also paying to have it generate PTX code.
                    That's up to them. The only thing is that such an approach has nothing to do with openness. Hence it looks like marketing bullshit from NV.

                    Now anyone else, including AMD can choose to either add support for the OpenACC implementation to generate code which works with their cards, or less likely I suppose, support PTX in their card drivers.
                    And why should anyone except NV put in effort to maintain NV's crap, when they're put at a disadvantage from the start? I'll admit "Open"ACC had a really closed and vendor-locked start. So I'm inclined to consider this dumb CUDA rebadging.

                    Either way, GCC gets an OpenACC implementation which NVidia pays for, which can then be extended to support any card.
                    The only issue is that nobody except Nvidia benefits from this move, and there are already a dozen and a half competing accelerated/parallel-computation standards. So for GCC, and anyone who is not Nvidia, it hardly counts as a benefit, especially implemented like this.

                    Of course NVidia will not pay for the latter, they pay for OpenACC and it being able to generate PTX code, selfish? not really in my opinion.
                    IMO, if someone sets out to make something "open", they should stick to more cooperative approaches.

                    NVidia is guilty of 'pushing yet another NVidia-only solution with no intention of cooperation with other vendors'.
                    Something like that. They developed it behind closed doors. The only implementation that exists needs a huge blob to post-process the generated "code". And after all this they dare to call this crap "open"? Marketing bullshit for sure.



                    • #30
                      Originally posted by IsacDaavid View Post
                      Reliability, that's the very reason I'm using Nouveau instead of unknowable, unauditable, unmodifiable blobs.
                      ...not to mention the fact that NV's foreign blob failed to work with newer kernels for a while. Some people have really interesting views on "reliability". When NV's kernel module does not build with the 3.12 kernel, does that count as "reliable"? Reliable ... FAIL?

