AMD Releases OpenCL ATI GPU Support For Linux


  • #21
    So what?

    "ATI releases OpenCL ATI GPU support for Linux"

    So what? What does this mean? What's OpenCL good for? What applications might start to take advantage of this? Why should I care?

    That's what I'd really like to read in the article.



    • #22
      So ATI purposely screws over its previous generation yet again. First by not providing stable drivers, now by providing OpenCL support only to the newest cards. ATI's OpenCL implementation should have provided support for all graphics cards that they support in their drivers. Oh well, yet another reason why AMD doesn't deserve any more of my support or money.



      • #23
        Originally posted by alazyworkaholic
        "ATI releases OpenCL ATI GPU support for Linux"

        So what? What does this mean? What's OpenCL good for? What applications might start to take advantage of this? Why should I care? That's what I'd really like to read in the article.
        If you follow the "AMD Developer Central" link Michael provided, and scroll down to "Related Resources", you will see a number of links providing overview information for OpenCL.
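
        For a quick sense of what this release actually enables, here is a tiny, illustrative host-side check (a sketch of typical OpenCL usage, not something from AMD's docs) that an application might run to discover the GPU as a compute device:

        Code:
        /* Illustrative only: enumerate the first OpenCL platform and GPU device.
         * With AMD's new driver support installed, a Radeon should show up here. */
        #include <CL/cl.h>
        #include <stdio.h>

        int main(void)
        {
            cl_platform_id platform;
            cl_device_id device;
            char name[256];

            if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
                fprintf(stderr, "no OpenCL platform found\n");
                return 1;
            }
            if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
                fprintf(stderr, "no OpenCL-capable GPU found\n");
                return 1;
            }
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("OpenCL GPU device: %s\n", name);
            return 0;
        }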

        Originally posted by LavosPhoenix
        So ATI purposely screws over its previous generation yet again. First by not providing stable drivers, now by providing OpenCL support only to the newest cards. ATI's OpenCL implementation should have provided support for all graphics cards that they support in their drivers. Oh well, yet another reason why AMD doesn't deserve any more of my support or money.
        I believe the issue here is that OpenCL (an industry standard developed in 2008 to provide an open standard for computing in 2010 and beyond) requires hardware capabilities for full compliance that were not included in our 2006 and 2007 GPUs. It's probably possible to do some kind of subset implementation, and I'm sure the open source drivers will implement one anyway, but right now I don't know how useful an implementation without the local and global data share memories would be.
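
        To make the hardware dependency concrete, here is a minimal OpenCL C sketch (illustrative only, not from the release) of a work-group reduction, the kind of kernel that leans on __local memory, i.e. the on-chip local data share mentioned above:

        Code:
        /* Hypothetical example: per-work-group partial sums staged in __local
         * memory. On hardware without a local data share there is no fast
         * on-chip storage to back 'scratch', which is exactly the compliance gap. */
        __kernel void partial_sums(__global const float *in,
                                   __global float *out,
                                   __local float *scratch)
        {
            size_t gid = get_global_id(0);
            size_t lid = get_local_id(0);

            scratch[lid] = in[gid];           /* stage one element per work-item */
            barrier(CLK_LOCAL_MEM_FENCE);     /* whole group must finish writing */

            /* Tree reduction carried out entirely in on-chip local memory. */
            for (size_t stride = get_local_size(0) / 2; stride > 0; stride /= 2) {
                if (lid < stride)
                    scratch[lid] += scratch[lid + stride];
                barrier(CLK_LOCAL_MEM_FENCE);
            }

            if (lid == 0)
                out[get_group_id(0)] = scratch[0];   /* one partial sum per group */
        }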

        I guess I'd better apologize for DX11 right now before things get any worse
        Last edited by bridgman; 14 October 2009, 12:22 PM.



        • #24
          Originally posted by LavosPhoenix
          So ATI purposely screws over its previous generation yet again. First by not providing stable drivers, now by providing OpenCL support only to the newest cards. ATI's OpenCL implementation should have provided support for all graphics cards that they support in their drivers. Oh well, yet another reason why AMD doesn't deserve any more of my support or money.
          Did it ever cross your mind that those GPUs may not support the hardware features necessary for OpenCL in the first place? No, probably not.



          • #25
            I really don't buy into 'C++ in hardware'. Are you guys sure you're not misinterpreting something? Something like 'the CUDA compiler can eat C++ and turn it into the instruction stream needed by the card'?



            • #26
              Originally posted by energyman
              I really don't buy into 'C++ in hardware'. Are you guys sure you're not misinterpreting something? Something like 'the CUDA compiler can eat C++ and turn it into the instruction stream needed by the card'?
              Well, that would legitimately be "C++ on GPU", but the notion of that somehow being tied to "real applications" running on the GPU makes no sense to me. No matter what language a GPU-targeted compiler supports, a GPU is not going to have the general-purpose libraries, OS facilities, or I/O capabilities demanded by "real applications". To the extent that applications use any kind of C++-on-GPU capability, I don't see how it would be any more or less "real" than using OpenCL, Cg, or whatever.



              • #27
                Originally posted by energyman
                I really don't buy into 'C++ in hardware'. Are you guys sure you're not misinterpreting something? Something like 'the CUDA compiler can eat C++ and turn it into the instruction stream needed by the card'?
                What NVIDIA is saying is that certain C++ features weren't possible on previous generations because the hardware lacked the necessary support. See this AnandTech article for more info: http://www.anandtech.com/video/showdoc.aspx?i=3651&p=6

                In previous architectures there was a different load instruction depending on the type of memory: local (per thread), shared (per group of threads) or global (per kernel). This created issues with pointers and generally made a mess that programmers had to clean up.

                Fermi unifies the address space so that there's only one instruction and the address of the memory is what determines where it's stored. The lowest bits are for local memory, the next set is for shared and then the remainder of the address space is global.

                A unified address space is apparently necessary to enable C++ support on NVIDIA GPUs, which is what Fermi is designed to do. Fermi implements a wide set of changes to its ISA, primarily aimed at enabling C++ support: virtual functions, new/delete, and try/catch are all parts of C++ that are now enabled on Fermi.
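
                OpenCL C exposes the same constraint in its type system: a pointer's address space is part of its type, so identical helpers have to be written once per space. A small illustrative sketch (hypothetical function names) of what unified addressing does away with:

                Code:
                /* Without a unified address space, the memory region is baked into the
                 * pointer type, so the same code is duplicated per space. With unified
                 * addressing, one generic pointer could serve all three. */
                float sum3_global (__global const float *p) { return p[0] + p[1] + p[2]; }
                float sum3_local  (__local  const float *p) { return p[0] + p[1] + p[2]; }
                float sum3_private(const float *p)          { return p[0] + p[1] + p[2]; } /* __private is the default */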



                • #28
                  It's great that AMD/ATI is supporting Linux, but I assume this is proprietary. Is there an open source alternative, either as an independent package or with xorg-ati? Is there any additional documentation needed for this?

                  I wonder how hard it would be for this to work on older cards, and whether an open source driver could take advantage of this.

                  matt



                  • #29
                    Originally posted by smitty3268
                    What NVIDIA is saying is that certain C++ features weren't possible on previous generations because the hardware lacked the necessary support. See this AnandTech article for more info: http://www.anandtech.com/video/showdoc.aspx?i=3651&p=6
                    Yes, I read that. It sounds like 'you will be able to use C++ to program the GPU' and not 'you can write C++ code and the card will eat it directly'.



                    • #30
                      Yep. That's probably for the best anyway; it's a really bad idea to just run normal C++ application code on a GPU, because it's almost certainly going to be mostly single-threaded and would run much faster on a normal CPU. Where I see this being useful is for things like C++ matrix libraries, and I imagine even then you'll have to write code specifically for the hardware to get any kind of decent performance, just like you do with CUDA now.

