A Developer Hacked AMD's GCN GPUs To Run Custom Code Via OpenGL


  • glisse
    replied
    Originally posted by CrystalGamma View Post

    I think the dispatch part is pretty much what HSA is about.
    As for your linking part, that would mean you have to not only build your program for each CPU ISA, but also each GPU ISA (of which there are way more, basically every different chip has their own) and you have to rebuild your system if you ever want to use a different GPU.

    Also, please do not make all-bold posts. He who emphasizes everything, emphasizes nothing.
    ISA is not the only issue here; the whole concept of a GPU thread does not exist, and POSIX threads are not a good fit. You do not want 10000 fake POSIX threads to account for every single GPU thread, do you? What happens when a SEGFAULT occurs on the GPU? How do you make gdb aware of the GPU? Should it handle GPU ISAs directly, or some generic one? ...

    Adding a new section to ELF is the easy part. It's everything else that is not.

    Leave a comment:


  • GreatEmerald
    replied
    Originally posted by CrystalGamma View Post
    As for your linking part, that would mean you have to not only build your program for each CPU ISA, but also each GPU ISA (of which there are way more, basically every different chip has their own) and you have to rebuild your system if you ever want to use a different GPU.
    Sounds great for Gentoo users!

    Leave a comment:


  • artivision
    replied
    With free software we have free and direct access to the HW back-ends, so let's move on, nothing to see here.

    Leave a comment:


  • CrystalGamma
    replied
    Originally posted by << ⚛ >> View Post
    Ideally, with CPU-GPU unified address space it should be possible to simply put the GPU binary code in a 4K page(s) and tell the Linux kernel to start a GPU thread from an address located in the 4K page. User-space access to execution of code on a GPU ought to be that simple. It ought to be possible to seamlessly link GPU code into executables and shared libraries, most likely living in a separate .gputext ELF section.
    I think the dispatch part is pretty much what HSA is about.
    As for your linking part, that would mean you have to not only build your program for each CPU ISA, but also each GPU ISA (of which there are way more, basically every different chip has their own) and you have to rebuild your system if you ever want to use a different GPU.

    Also, please do not make all-bold posts. He who emphasizes everything, emphasizes nothing.

    Leave a comment:


  • schmidtbag
    replied
    Originally posted by << ⚛ >> View Post
    Ideally, with CPU-GPU unified address space it should be possible to simply put the GPU binary code in a 4K page(s) and tell the Linux kernel to start a GPU thread from an address located in the 4K page. User-space access to execution of code on a GPU ought to be that simple. It ought to be possible to seamlessly link GPU code into executables and shared libraries, most likely living in a separate .gputext ELF section.
    Ideally, or in actuality? Ought to be, or will be? Because if what you said isn't done in practice and isn't planned, then it is irrelevant. Tomasz seemed to be impatient for things like Vulkan and just wanted to see whether what he noticed was even possible, and it was.

    AMD and NVidia most likely won't deliver this simplicity by themselves because it seems to be in their interests not to do so.

    It is a disaster and completely wrong that the author of the project had to resort to hacking just to run his binary code on the GPU!
    Who said AMD or Nvidia were going to utilize this? Both companies are well aware of what their hardware can do and how it can be better utilized. I don't suspect any major software company would distribute software that uses this method either. You're completely missing the point here. Tomasz isn't saying "hey, check out this reverse-engineered driver hack I figured out specific to GCN! It completely makes OpenCL obsolete!" but rather "hey, check out this interesting way to tap into your GPU's potential!"
    There is nothing wrong with doing that.


    To put this in another perspective, people have managed to get Doom to run on TI graphing calculators. By your logic, that's a problem and something TI shouldn't allow. But all that shows is a demonstration of the hardware's potential - what's so bad about that?
    Last edited by schmidtbag; 12-01-2015, 01:32 PM.

    Leave a comment:


  • atomsymbol
    replied
    Originally posted by schmidtbag View Post
    Leave it to the Phoronix community to find something wrong with everything. Seriously, this was just a cool project, and the first thing you people feel the need to do is crap on his work. Though it's a little weird that this guy seemed to do all of this on Windows, stuff like this is what Linux and open-source development are all about.
    The whole concept of accessing GPUs is wrong today.

    Ideally, with CPU-GPU unified address space it should be possible to simply put the GPU binary code in a 4K page(s) and tell the Linux kernel to start a GPU thread from an address located in the 4K page. User-space access to execution of code on a GPU ought to be that simple. It ought to be possible to seamlessly link GPU code into executables and shared libraries, most likely living in a separate .gputext ELF section.

    AMD and NVidia most likely won't deliver this simplicity by themselves because it seems to be in their interests not to do so.

    It is a disaster and completely wrong that the author of the project had to resort to hacking just to run his binary code on the GPU!
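    To make the proposed model concrete: the sketch below is a thought experiment only. `gpu_thread_create` is a hypothetical interface (no such Linux kernel API exists); it is stubbed here to run the entry point on the CPU so the sketch is self-contained and actually runs.

    ```c
    #include <stdio.h>

    typedef void (*kernel_fn)(void);

    /* HYPOTHETICAL: ask the kernel to start a GPU thread at `entry`.
     * No such Linux interface exists today; this stub simply runs the
     * code on the CPU so the sketch compiles and runs. */
    static int gpu_thread_create(kernel_fn entry) {
        entry();
        return 0;
    }

    /* In the model described above, this would be GPU machine code,
     * linked into the executable in a separate .gputext ELF section
     * and reachable through the CPU-GPU unified address space. */
    static void my_kernel(void) {
        puts("kernel ran");
    }

    int main(void) {
        /* Dispatch would be as simple as handing the kernel an address
         * inside the process's own address space. */
        return gpu_thread_create(my_kernel);
    }
    ```

    The point of the sketch is the calling convention, not the stub: dispatching GPU work would look no different from starting a thread.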
    Last edited by atomsymbol; 12-02-2015, 04:20 AM.

    Leave a comment:


  • schmidtbag
    replied
    Leave it to the Phoronix community to find something wrong with everything. Seriously, this was just a cool project, and the first thing you people feel the need to do is crap on his work. Though it's a little weird that this guy seemed to do all of this on Windows, stuff like this is what Linux and open-source development are all about.

    Leave a comment:


  • swoorup
    replied
    To me, this only seems like a step backwards to where we came from.

    Leave a comment:


  • chithanh
    replied
    Originally posted by dungeon View Post
    A GPU-based OS would be one where you have a working, fully functional OS kernel running on the GPU instead of the CPU... but I don't think there is one
    The Raspberry Pi firmware, which runs on the VideoCore IV, comes close. It is all proprietary, though.

    Leave a comment:


  • Emmanuel Deloget
    replied
    Originally posted by OneTimeShot View Post
    Presumably the byte stream verifier in the kernel should prevent anything malicious from being loaded from userspace (?) A moderately amusing hack while waiting for Vulkan, which will basically support this type of thing officially, I suppose...
    I assumed SPIR-V was supposed to be the official bytecode that has to be understood by drivers - you still don't cope with the vendor ISA in the Vulkan world (and I don't know how one prevents a malicious byte stream from being sent to the GPU without being extremely invasive and inefficient for legitimate uses; I might be wrong, of course).
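    For reference, the very first step of checking a SPIR-V byte stream is validating its five-word header, which starts with the magic number 0x07230203. The C sketch below shows only that trivial check (assuming host-endian words); real verification, as done by drivers or the spirv-val tool, must additionally walk and validate every instruction, which is exactly the hard and costly part being discussed.

    ```c
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    #define SPIRV_MAGIC 0x07230203u  /* first word of every SPIR-V module */

    /* Minimal header sanity check: magic number and the five-word header.
     * Word 4 is the instruction schema, 0 in current SPIR-V. */
    static int spirv_header_ok(const uint32_t *words, size_t nwords) {
        if (nwords < 5)                return 0;  /* header is 5 words */
        if (words[0] != SPIRV_MAGIC)   return 0;  /* wrong magic */
        if (words[4] != 0)             return 0;  /* unknown schema */
        return 1;
    }

    int main(void) {
        /* A bare header with no instructions: magic, version 1.0,
         * generator 0, ID bound 8, schema 0. */
        uint32_t module[5] = { SPIRV_MAGIC, 0x00010000u, 0, 8, 0 };
        printf("%s\n", spirv_header_ok(module, 5) ? "valid header"
                                                  : "rejected");
        return 0;
    }
    ```

    Passing this check says nothing about the module's safety; it only rules out streams that are not SPIR-V at all.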

    Leave a comment:
