AMD Radeon GCN Offloading Support For OpenMP/OpenACC On The Way For GCC 10

  • #11
    wizard69
    DrYak

    Originally posted by karolherbst View Post
    ...
    I can just give this advice to every application developer who wants their software to run on normal desktop machines: ...
    I explicitly stated which developers my remark was aimed at. I am well aware that for "internal usage" you can always compile against your target machine... but what about closed-source software, where you have no control over that? Maybe there is awesome software compiled only against X, but you want to run it on Y....

    Of course there is this "niche" area where you can still use this fancy OpenMP GPU offloading, but for normal consumer machines it is useless. And the idea behind OpenMP is indeed quite nice, but... not usable for Linux desktop developers, which is exactly where it could be super beneficial for performance and power efficiency.
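
    As a concrete illustration of that point, here is a minimal OpenMP offload sketch in C. The build lines in the comment are illustrative and assume an offload-enabled GCC; the key issue is that the device code baked into the binary depends on which -foffload targets were chosen at compile time.

    Code:
    /* Minimal OpenMP target-offload sketch. Illustrative build lines,
     * assuming an offload-enabled GCC:
     *   gcc -fopenmp -foffload=amdgcn-amdhsa saxpy.c -o saxpy   (AMD GCN)
     *   gcc -fopenmp -foffload=nvptx-none saxpy.c -o saxpy      (NVIDIA)
     */
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static float x[N], y[N];
        const float a = 2.0f;

        for (int i = 0; i < N; i++) {
            x[i] = (float)i;
            y[i] = 1.0f;
        }

        /* Run the loop on the device if one is available; the runtime
         * falls back to the host otherwise. */
        #pragma omp target teams distribute parallel for map(to: x) map(tofrom: y)
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[42] = %f\n", y[42]);
        return 0;
    }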



    • #12
      Ohh, and don't get me started on how all of that makes it harder for small players to even provide an OpenMP implementation now... How many of the current open source drivers do you think will provide an implementation?



      • #13
        Originally posted by karolherbst View Post
        wizard69
        DrYak

        I explicitly stated which developers my remark was aimed at. I am well aware that for "internal usage" you can always compile against your target machine... but what about closed-source software, where you have no control over that? Maybe there is awesome software compiled only against X, but you want to run it on Y....

        Of course there is this "niche" area where you can still use this fancy OpenMP GPU offloading, but for normal consumer machines it is useless. And the idea behind OpenMP is indeed quite nice, but... not usable for Linux desktop developers, which is exactly where it could be super beneficial for performance and power efficiency.
        In Nvidia land this works just fine for all GPU architectures, since the offload backend JIT-compiles functions for the target GPU.
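
        For what it's worth, the mechanism being referred to is roughly this: NVIDIA toolchains ship device code as PTX (a virtual ISA), and the CUDA driver JIT-compiles that PTX for whichever GPU is actually installed. Below is a rough C sketch of that step using the CUDA driver API directly; an OpenMP/OpenACC runtime does the equivalent internally, and the PTX string here is only a placeholder.

        Code:
        /* Sketch of the driver-side JIT step. The PTX string is a
         * placeholder rather than real kernel code, so the load is expected
         * to fail here; with real PTX embedded at build time,
         * cuModuleLoadData() JIT-compiles it for the GPU found at runtime.
         * Build (illustrative): gcc jit.c -lcuda -o jit */
        #include <stdio.h>
        #include <cuda.h>

        int main(void)
        {
            CUdevice dev;
            CUcontext ctx;
            CUmodule mod;
            const char *ptx = "// PTX emitted at build time would go here";

            cuInit(0);
            cuDeviceGet(&dev, 0);
            cuCtxCreate(&ctx, 0, dev);

            /* The driver JIT lowers generic PTX to native code for whatever
             * GPU architecture is present. */
            if (cuModuleLoadData(&mod, ptx) == CUDA_SUCCESS)
                cuModuleUnload(mod);
            else
                fprintf(stderr, "driver rejected the placeholder PTX\n");

            cuCtxDestroy(ctx);
            return 0;
        }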



        • #14
          Originally posted by karolherbst View Post
          I explicitly stated which developers my remark was aimed at. I am well aware that for "internal usage" you can always compile against your target machine... but what about closed-source software, where you have no control over that? Maybe there is awesome software compiled only against X, but you want to run it on Y....
          Again, I'm not just trying to prove you wrong for the sake of being contradictory.
          I'm just trying to get you to step back a bit and look at the bigger picture.

          Who is going to benefit the most from hardware offloading?
          The single most frequent application is going to be number crunching, which means extremely custom-tailored jobs. (I know something about it: that's literally my job in medical research, and I literally have a terminal window open next to this browser right now, trying to understand why a peculiar job is crashing.)
          After that it's probably going to be running deep neural nets for smaller actors who aren't Facebook/Google/etc., who don't have dedicated specialized silicon, and who will offload to GPGPU instead.
          etc.

          Much further down the list you get to the small niche use case of home/SOHO users who need hardware offloading for their own purposes.
          Even there, you're going to have a mix of rather targeted hardware (e.g. 3D rendering software mostly aimed at the few configurations most likely to be found on render-farm nodes; "buy Nvidia model xyzs Pro Ti" is going to be a common requirement in most of these situations anyway, so boiling things down to 2-3 common architectures will be the norm) and tasks handled by other, higher-level APIs (rendering and movie editing are probably going to rely heavily on OpenGL/Vulkan and the various video hardware APIs too), etc.

          There are very few reasons *nowadays* for end users to run randomly downloaded apps that require hardware offloading and need to target every last weird piece of hardware out there, including whatever happens to be in their GPUs.
          By the time (months to years from now) such situations appear (let me guess: better face-swap and face filters that actually run the neural net on the smartphone), OpenACC implementations will probably have gotten their SPIR-V act together.

          But in the immediate future, for the largest fraction of users, OpenACC is going to be used mostly in number-crunching scenarios.

          TL;DR: Don't expect an "OpenACC-powered" edition of Photoshop or GIMP before 2020. Expect SPIR-V support by then.

          Or in other words: the absence of a portable bytecode isn't going to be a hindrance during the short period of time when architecture-specific binaries are the only solution.
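
          To make the number-crunching case concrete, here is a tiny OpenACC sketch in C: a dense dot-product reduction offloaded to whatever accelerator the compiler was configured for. The build line is illustrative (assuming an OpenACC-enabled GCC); without a usable accelerator the loop simply runs on the host.

          Code:
          /* Tiny OpenACC number-crunching sketch.
           * Illustrative build: gcc -fopenacc -foffload=amdgcn-amdhsa dot.c -o dot */
          #include <stdio.h>

          #define N 1000000

          int main(void)
          {
              static double a[N], b[N];
              double sum = 0.0;

              for (int i = 0; i < N; i++) {
                  a[i] = 0.5 * i;
                  b[i] = 2.0;
              }

              /* Offload the reduction; copyin() describes the data movement
               * for the two input arrays. */
              #pragma acc parallel loop reduction(+:sum) copyin(a, b)
              for (int i = 0; i < N; i++)
                  sum += a[i] * b[i];

              printf("dot product = %f\n", sum);
              return 0;
          }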
