To me, this only seems like a step backwards to where we came from.
A Developer Hacked AMD's GCN GPUs To Run Custom Code Via OpenGL
Leave it to the Phoronix community to find something wrong with everything. Seriously, this was just a cool project and the first thing you people feel the need to do is crap on his work. Though it's a little weird this guy seemed to do all of this on Windows, stuff like this is what Linux and open-source development is all about.
Originally posted by schmidtbag:
Leave it to the Phoronix community to find something wrong with everything. Seriously, this was just a cool project and the first thing you people feel the need to do is crap on his work. Though it's a little weird this guy seemed to do all of this on Windows, stuff like this is what Linux and open-source development is all about.
Ideally, with CPU-GPU unified address space it should be possible to simply put the GPU binary code in a 4K page(s) and tell the Linux kernel to start a GPU thread from an address located in the 4K page. User-space access to execution of code on a GPU ought to be that simple. It ought to be possible to seamlessly link GPU code into executables and shared libraries, most likely living in a separate .gputext ELF section.
AMD and NVidia most likely won't deliver this simplicity by themselves because it seems to be in their interests not to do so.
It is a disaster and completely wrong that the author of the project had to resort to hacking just to run his binary code on the GPU!
Last edited by Guest; 02 December 2015, 04:20 AM.
Originally posted by << ⚛ >>:
Ideally, with CPU-GPU unified address space it should be possible to simply put the GPU binary code in a 4K page(s) and tell the Linux kernel to start a GPU thread from an address located in the 4K page. User-space access to execution of code on a GPU ought to be that simple. It ought to be possible to seamlessly link GPU code into executables and shared libraries, most likely living in a separate .gputext ELF section.
AMD and NVidia most likely won't deliver this simplicity by themselves because it seems to be in their interests not to do so.
It is a disaster and completely wrong that the author of the project had to resort to hacking just to run his binary code on the GPU!
There is nothing wrong with doing that.
To put this in another perspective, people have managed to get Doom running on TI graphing calculators. By your logic, that's a problem and something TI shouldn't allow. But all that shows is a demonstration of the hardware's potential - what's so bad about that?
Last edited by schmidtbag; 01 December 2015, 01:32 PM.
Originally posted by << ⚛ >>:
Ideally, with CPU-GPU unified address space it should be possible to simply put the GPU binary code in a 4K page(s) and tell the Linux kernel to start a GPU thread from an address located in the 4K page. User-space access to execution of code on a GPU ought to be that simple. It ought to be possible to seamlessly link GPU code into executables and shared libraries, most likely living in a separate .gputext ELF section.
I think the dispatch part is pretty much what HSA is about.
As for your linking part, that would mean you have to build your program not only for each CPU ISA but also for each GPU ISA (of which there are far more; basically every chip has its own), and you would have to rebuild your system if you ever want to use a different GPU.
Also, please do not make all-bold posts. He who emphasizes everything, emphasizes nothing.
Originally posted by CrystalGamma:
I think the dispatch part is pretty much what HSA is about.
As for your linking part, that would mean you have to build your program not only for each CPU ISA but also for each GPU ISA (of which there are far more; basically every chip has its own), and you would have to rebuild your system if you ever want to use a different GPU.
Also, please do not make all-bold posts. He who emphasizes everything, emphasizes nothing.
Adding a new section to an ELF file is the easy part. It's everything else that is not.