NVIDIA Developers Express Interest In Helping Out libc++/libstdc++ Parallel Algorithms

  • NVIDIA Developers Express Interest In Helping Out libc++/libstdc++ Parallel Algorithms

    Phoronix: NVIDIA Developers Express Interest In Helping Out libc++/libstdc++ Parallel Algorithms

    NVIDIA developers have expressed interest in helping the open-source GCC libstdc++ and LLVM Clang libc++ standard libraries in bringing up support for the standardized parallel algorithms...

    http://www.phoronix.com/scan.php?pag...Parallel-Algos
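    For context, the parallel algorithms in question are the C++17 execution-policy overloads of the standard algorithms. A minimal sketch of what that API looks like (std::execution::seq is used here so the example links without TBB on libstdc++; swapping in std::execution::par is what requests parallel execution):

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <execution>
    #include <vector>

    int main() {
        std::vector<int> v{5, 2, 9, 1, 7};
        // The execution policy is simply the first argument to the algorithm.
        // seq = sequential; par = implementation may parallelize the sort.
        std::sort(std::execution::seq, v.begin(), v.end());
        assert(v.front() == 1 && v.back() == 9);
    }
    ```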

  • #2
    This is good news. Let's hope they follow through, and then continue on to fix their graphics stack so we can all move on and forget about EGLStreams.



    • #3
      I think parallel STL is probably a waste of time & resources. I think it's likely to suffer from too much data movement and synchronization.

      IMO, you really want GPU support to be built right into the language, so the compiler has free rein and full visibility to optimize such aspects.



      • #4
        Originally posted by coder View Post
        I think parallel STL is probably a waste of time & resources. I think it's likely to suffer from too much data movement and synchronization.

        IMO, you really want GPU support to be built right into the language, so the compiler has free rein and full visibility to optimize such aspects.
        I'm not a fan of the STL, but you know that std::move && friends in modern C++ allow you to cut unnecessary data movement?
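        For readers unfamiliar with move semantics, a minimal sketch of what's being alluded to: std::move lets a container transfer ownership of its heap buffers instead of copying their contents (the moved-from object is left in a valid but unspecified state):

        ```cpp
        #include <cassert>
        #include <string>
        #include <utility>
        #include <vector>

        int main() {
            // Three 1000-character strings: ~3 KB of heap data.
            std::vector<std::string> src(3, std::string(1000, 'x'));
            // The move transfers the internal buffer pointer; no
            // per-element copy of the string data takes place.
            std::vector<std::string> dst = std::move(src);
            assert(dst.size() == 3 && dst[0].size() == 1000);
        }
        ```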



        • #5
          Originally posted by coder View Post
          I think parallel STL is probably a waste of time & resources. I think it's likely to suffer from too much data movement and synchronization.

          IMO, you really want GPU support to be built right into the language, so the compiler has free rein and full visibility to optimize such aspects.
          Pretty sure you can compile C++ for a GPU these days?
          And support for parallel programming right in the language & standard lib is where you want it...



          • #6
            Originally posted by Happy Heyoka View Post
            Pretty sure you can compile C++ for a GPU these days?
            Not exactly. LLVM has a backend for AMD GPUs and I remember reading that HSA has all the necessary support for full C++. But I'm not aware of any way to compile normal C++ to run entirely on a GPU.

            Parallel STL does not mean running entire programs on the GPU (though doesn't exclude it, either). It's mainly intended as a library for dispatching operations to multiple cores or GPU-class compute accelerators.

            Originally posted by Happy Heyoka View Post
            And support for parallel programming right in the language & standard lib is where you want it...
            I guess the easiest way to answer is to say the devil is in the details.

            The compiler needs to understand enough about the computation and data flow that it doesn't over-synchronize on the intermediates or move data between the CPU and GPU needlessly. Basically, you want to dispatch a large sequence of fairly heavy-weight computation to the GPU, and not synchronize on anything but the final result. I haven't seen much of parallel STL, but I'm skeptical it will be crafted to support the sort of chaining-together of smaller pipeline stages that would be necessary to achieve good efficiency.
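            For what it's worth, the standard library does fuse at least two stages in one algorithm: std::transform_reduce combines a per-element transform and a reduction into a single pass, with no intermediate buffer to synchronize on. A minimal sketch (using the seq policy so it builds without extra link flags; par is the parallel request):

            ```cpp
            #include <cassert>
            #include <execution>
            #include <functional>
            #include <numeric>
            #include <vector>

            int main() {
                std::vector<int> v{1, 2, 3, 4};
                // Square each element and sum the results in one fused pass.
                int sum_sq = std::transform_reduce(
                    std::execution::seq, v.begin(), v.end(), 0,
                    std::plus<>{},                    // reduction
                    [](int x) { return x * x; });     // per-element transform
                assert(sum_sq == 30);  // 1 + 4 + 9 + 16
            }
            ```

            Whether longer chains of stages can be fused this way is exactly the open question raised above.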



            • #7
              Originally posted by coder View Post
              But I'm not aware of any way to compile normal C++ to run entirely on a GPU.

              Parallel STL does not mean running entire programs on the GPU (though doesn't exclude it, either).
              I can't think of any reason why you'd want to run the whole thing on a GPU - you brought it up...

              Originally posted by coder View Post
              The compiler needs to understand enough about the computation and data flow that it doesn't over-synchronize on the intermediates or move data between the CPU and GPU needlessly. [..snip..] I haven't seen much of parallel STL, but I'm skeptical it will be crafted to support the sort of chaining-together of smaller pipeline stages that would be necessary to achieve good efficiency.
              Not trying to be rude, but efficiency is your job, not the compiler's... you can use the current standard library/STL to write great code or awful code. 'Twas ever thus.

              What is interesting about this is that it moves the tools for doing parallel code "one layer deeper": in theory, if a platform has a C++ standard 0x?? compiler, then I have parallelism options available to me; before now, I needed some third-party parallel library to be ported to the platform. I'm sure it'll be rudimentary, but even a lowest-common-denominator solution removes a whole lot of extra work on my part.
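              To sketch that "one layer deeper" point: because the execution policy is just the first argument to a standard algorithm, a caller can choose serial or parallel execution without pulling in a third-party library. The count_even helper below is purely illustrative:

              ```cpp
              #include <algorithm>
              #include <cassert>
              #include <execution>
              #include <utility>
              #include <vector>

              // Generic over the policy: the caller decides seq vs. par,
              // and the algorithm call itself is unchanged.
              template <class Policy>
              int count_even(Policy&& pol, const std::vector<int>& v) {
                  return static_cast<int>(
                      std::count_if(std::forward<Policy>(pol),
                                    v.begin(), v.end(),
                                    [](int x) { return x % 2 == 0; }));
              }

              int main() {
                  std::vector<int> v{1, 2, 3, 4, 5, 6};
                  assert(count_even(std::execution::seq, v) == 3);
              }
              ```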



              • #8
                Originally posted by Happy Heyoka View Post
                I can't think of any reason why you'd want to run the whole thing on a GPU - you brought it up...
                No, that's what I thought you were getting at.

                Originally posted by Happy Heyoka View Post
                Not trying to be rude but efficiency is your job, not the compilers...
                Well, it's ultimately my job, so I need to use the right tools and in the right way to achieve that end. The compiler is certainly a piece in the puzzle, but so are libraries and frameworks.

                Originally posted by Happy Heyoka View Post
                you can use the current standardlib/STL to write great code or awful code. Twas ever thus.
                There are better and worse ways to use it. Sometimes you can't use it at all in a particular piece of code, if you need really good performance.
