A Developer Hacked AMD's GCN GPUs To Run Custom Code Via OpenGL


  • #21
    To me, this only seems like a step backwards to where we came from.

    Comment


    • #22
      Leave it to the Phoronix community to find something wrong with everything. Seriously, this was just a cool project, and the first thing you people feel the need to do is crap on his work. Though it's a little weird that this guy seemed to do all of this on Windows, stuff like this is what Linux and open-source development is all about.

      Comment


      • #23
        Originally posted by schmidtbag
        Leave it to the Phoronix community to find something wrong with everything. Seriously, this was just a cool project, and the first thing you people feel the need to do is crap on his work. Though it's a little weird that this guy seemed to do all of this on Windows, stuff like this is what Linux and open-source development is all about.
        The whole concept of accessing GPUs is wrong today.

        Ideally, with a CPU-GPU unified address space, it should be possible to simply put the GPU binary code in one or more 4K pages and tell the Linux kernel to start a GPU thread from an address located in those pages. User-space access to execution of code on a GPU ought to be that simple. It ought to be possible to seamlessly link GPU code into executables and shared libraries, most likely living in a separate .gputext ELF section.

        AMD and NVidia most likely won't deliver this simplicity by themselves because it seems to be in their interests not to do so.

        It is a disaster and completely wrong that the author of the project had to resort to hacking just to run his binary code on the GPU!
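
        To make this concrete, here is a purely hypothetical sketch of what that could look like from user space. Nothing like gpu_thread_create() or a .gputext section exists in any current kernel or toolchain; the names are made up only to illustrate the simplicity being argued for.

        Code:
        #include <stdint.h>
        #include <string.h>
        #include <sys/mman.h>

        /* Hypothetical sketch: gpu_thread_create() and .gputext do not exist
         * anywhere today.  The point is only that, with a unified CPU-GPU
         * address space, launching GPU code could in principle be as simple
         * as launching a CPU thread. */

        /* Imaginary syscall wrapper: start a GPU thread at 'entry'. */
        extern int gpu_thread_create(const void *entry, void *args);

        /* Imagine the linker collected the GPU machine code into its own
         * section, analogous to .text for the CPU. */
        extern const uint8_t _gputext_start[];
        extern const uint8_t _gputext_end[];

        int launch_gpu_code(void *args)
        {
            size_t len = (size_t)(_gputext_end - _gputext_start);

            /* Stage the GPU binary in ordinary pages; with truly unified
             * memory even this copy would be unnecessary. */
            void *page = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (page == MAP_FAILED)
                return -1;
            memcpy(page, _gputext_start, len);

            /* "Start a GPU thread from an address located in those pages." */
            return gpu_thread_create(page, args);
        }
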
        Last edited by Guest; 02 December 2015, 04:20 AM.

        Comment


        • #24
          Originally posted by << ⚛ >>
          Ideally, with a CPU-GPU unified address space, it should be possible to simply put the GPU binary code in one or more 4K pages and tell the Linux kernel to start a GPU thread from an address located in those pages. User-space access to execution of code on a GPU ought to be that simple. It ought to be possible to seamlessly link GPU code into executables and shared libraries, most likely living in a separate .gputext ELF section.
          Ideally, or in actuality? Ought to be, or will be? Because if what you said isn't done in practice and isn't planned, then it's irrelevant. Tomasz seemed to be impatient for things like Vulkan and just wanted to see if what he noticed was even possible, and it was.

          Originally posted by << ⚛ >>
          AMD and NVidia most likely won't deliver this simplicity by themselves because it seems to be in their interests not to do so.

          It is a disaster and completely wrong that the author of the project had to resort to hacking just to run his binary code on the GPU!
          Who said AMD or Nvidia were going to utilize this? Both companies are well aware of what their hardware can do and how it can be better utilized. I don't suspect any major software company would distribute software that uses this method either. You're completely missing the point here. Tomasz isn't saying "hey, check out this reverse-engineered driver hack I figured out specific to GCN! It completely makes OpenCL obsolete!" but rather "hey, check out this interesting way to tap into your GPU's potential!"
          There is nothing wrong with doing that.


          To put this in perspective: people have managed to get Doom running on TI graphing calculators. By your logic, that's a problem and something TI shouldn't allow. But all that does is demonstrate the hardware's potential - what's so bad about that?
          Last edited by schmidtbag; 01 December 2015, 01:32 PM.

          Comment


          • #25
            Originally posted by << ⚛ >>
            Ideally, with a CPU-GPU unified address space, it should be possible to simply put the GPU binary code in one or more 4K pages and tell the Linux kernel to start a GPU thread from an address located in those pages. User-space access to execution of code on a GPU ought to be that simple. It ought to be possible to seamlessly link GPU code into executables and shared libraries, most likely living in a separate .gputext ELF section.
            I think the dispatch part is pretty much what HSA is about.
            As for your linking part, that would mean you not only have to build your program for each CPU ISA but also for each GPU ISA (of which there are way more; basically every different chip has its own), and you would have to rebuild your system if you ever want to use a different GPU.
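
            For the curious, here is a rough sketch of what user-space dispatch looks like through the HSA runtime: the application writes an AQL kernel-dispatch packet into a user-mode queue and rings a doorbell signal, with no per-launch ioctl. This assumes hsa_init() has already been called, that the kernel_object handle came from a finalized code object, and that kernarg points at properly allocated kernarg memory; error handling is omitted.

            Code:
            #include <hsa/hsa.h>
            #include <stdint.h>
            #include <string.h>

            /* Sketch: enqueue one kernel dispatch on a user-mode HSA queue.
             * 'agent' is a GPU agent found via hsa_iterate_agents(). */
            void dispatch_kernel(hsa_agent_t agent, uint64_t kernel_object, void *kernarg)
            {
                hsa_queue_t *queue;
                hsa_queue_create(agent, 256, HSA_QUEUE_TYPE_SINGLE,
                                 NULL, NULL, UINT32_MAX, UINT32_MAX, &queue);

                hsa_signal_t done;
                hsa_signal_create(1, 0, NULL, &done);

                /* Claim a slot in the queue's ring buffer. */
                uint64_t index = hsa_queue_add_write_index_relaxed(queue, 1);
                hsa_kernel_dispatch_packet_t *pkt =
                    (hsa_kernel_dispatch_packet_t *)queue->base_address +
                    (index % queue->size);

                /* Fill the packet body; header and setup (first 4 bytes) come last. */
                memset((uint8_t *)pkt + 4, 0, sizeof(*pkt) - 4);
                pkt->workgroup_size_x = 64;
                pkt->workgroup_size_y = 1;
                pkt->workgroup_size_z = 1;
                pkt->grid_size_x = 64 * 1024;
                pkt->grid_size_y = 1;
                pkt->grid_size_z = 1;
                pkt->kernel_object = kernel_object;
                pkt->kernarg_address = kernarg;
                pkt->completion_signal = done;
                pkt->setup = 1 << HSA_KERNEL_DISPATCH_PACKET_SETUP_DIMENSIONS; /* 1-D grid */

                /* Publish the header last, then ring the doorbell. */
                __atomic_store_n(&pkt->header,
                                 (uint16_t)(HSA_PACKET_TYPE_KERNEL_DISPATCH << HSA_PACKET_HEADER_TYPE),
                                 __ATOMIC_RELEASE);
                hsa_signal_store_relaxed(queue->doorbell_signal, (hsa_signal_value_t)index);

                /* The packet processor decrements the signal to 0 on completion. */
                hsa_signal_wait_acquire(done, HSA_SIGNAL_CONDITION_EQ, 0,
                                        UINT64_MAX, HSA_WAIT_STATE_BLOCKED);

                hsa_signal_destroy(done);
                hsa_queue_destroy(queue);
            }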

            Also, please do not make all-bold posts. He who emphasizes everything, emphasizes nothing.

            Comment


            • #26
              With free software we have free and direct access to the HW back-ends, so let's move on, nothing to see here.

              Comment


              • #27
                Originally posted by CrystalGamma
                As for your linking part, that would mean you not only have to build your program for each CPU ISA but also for each GPU ISA (of which there are way more; basically every different chip has its own), and you would have to rebuild your system if you ever want to use a different GPU.
                Sounds great for Gentoo users!

                Comment


                • #28
                  Originally posted by CrystalGamma

                  I think the dispatch part is pretty much what HSA is about.
                  As for your linking part, that would mean you not only have to build your program for each CPU ISA but also for each GPU ISA (of which there are way more; basically every different chip has its own), and you would have to rebuild your system if you ever want to use a different GPU.

                  Also, please do not make all-bold posts. He who emphasizes everything, emphasizes nothing.
                  The ISA is not the only issue here; the whole concept of a GPU thread does not exist, and POSIX threads are not a good fit. You do not want 10,000 fake POSIX threads to account for each single GPU thread. What do you do when a SEGFAULT happens on the GPU? How do you make gdb aware of the GPU? Should it handle the GPU ISA directly, or some generic one? ...

                  Adding a new section to ELF is the easy part. It's everything else that is not.

                  Comment
