OpenCL Is Coming To The GIMP Via GEGL


  • OpenCL Is Coming To The GIMP Via GEGL

    Phoronix: OpenCL Is Coming To The GIMP Via GEGL

    Outside of the direct X.Org / Mesa / Linux work being done this year as part of Google's Summer of Code, one of the more interesting projects is work by a student developer with GIMP who is bringing OpenCL support to the graphics program's GEGL image library...

    http://www.phoronix.com/vr.php?view=OTc5OQ

  • #2
    Originally posted by phoronix View Post
    483 milliseconds was needed when on the NVIDIA GPU in OpenCL while it took 526 milliseconds on the CPU without OpenCL. Most of the 483 milliseconds was spent transferring data to/from the GPU memory
    This means that if you have a really fast CPU, the CPU will be faster without the GPU!

    Only because "Most of the 483 milliseconds was spent transferring data to/from the GPU memory".

    Right now I have GPU acceleration turned off in Firefox 6 and Flash 11, only because I think my Phenom II B50 X4 @ 3.8 GHz is overall faster than my passively cooled HD 4670...

    I get massive mouse input lag in Heroes of Newerth if I use any GPU acceleration outside the game.
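
To make the "most of the time is the transfer" point concrete, here is a rough C sketch (not the article's benchmark) of how OpenCL profiling events can split one filter run into upload, kernel, and download time. The context, the queue (created with CL_QUEUE_PROFILING_ENABLE), the kernel with its arguments already set, and the buffers are assumed to exist; error checking is omitted.

/* Split a single filter pass into upload / kernel / download time using
 * OpenCL profiling events. Assumes an in-order queue created with
 * CL_QUEUE_PROFILING_ENABLE and a kernel whose arguments are already set. */
#include <CL/cl.h>
#include <stdio.h>

static double event_ms(cl_event ev)
{
    cl_ulong start = 0, end = 0;
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START,
                            sizeof(start), &start, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,
                            sizeof(end), &end, NULL);
    return (end - start) * 1e-6;   /* nanoseconds -> milliseconds */
}

void profile_filter(cl_command_queue queue, cl_kernel kernel,
                    cl_mem src, cl_mem dst, void *host, size_t bytes,
                    size_t global_size)
{
    cl_event e_up, e_run, e_down;

    clEnqueueWriteBuffer(queue, src, CL_FALSE, 0, bytes, host, 0, NULL, &e_up);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL,
                           0, NULL, &e_run);
    /* The blocking read also guarantees the two earlier commands finished. */
    clEnqueueReadBuffer(queue, dst, CL_TRUE, 0, bytes, host, 0, NULL, &e_down);

    printf("upload %.2f ms, kernel %.2f ms, download %.2f ms\n",
           event_ms(e_up), event_ms(e_run), event_ms(e_down));

    clReleaseEvent(e_up);
    clReleaseEvent(e_run);
    clReleaseEvent(e_down);
}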



    • #3
      Originally posted by Qaridarium View Post
      This means that if you have a really fast CPU, the CPU will be faster without the GPU!
      If one uses PBOs instead of buffer arrays it should be faster and/or more energy efficient, because you don't have to transfer data back and forth.
      Upgrading the GPU also improves performance: when I upgraded from a 9600 GT to a GTX 560 Ti, the PBO read/write performance in my little test went up about 4x!

      So it's really mostly up to the quality of the source code and the solutions it uses.
      Even if both the CPU and GPU solutions run equally fast (that is you have a newer CPU and older GPU) you should still use the GPU solution because it saves energy by doing less I/O.

      But then there are still some folks with old hardware that doesn't support PBOs yet (though it's a shame nowadays not to support PBOs), and maybe some crappy drivers.
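
As a rough illustration of the PBO point above, here is a minimal C sketch of asynchronous framebuffer readback through a pixel buffer object, assuming a current OpenGL 2.1+ context and GLEW already initialized; the function name and the RGBA8 layout are just for illustration.

/* Asynchronous readback through a GL_PIXEL_PACK_BUFFER: glReadPixels into a
 * bound PBO returns immediately, and the CPU only waits when the buffer is
 * mapped, so other work can overlap the DMA transfer. */
#include <GL/glew.h>
#include <string.h>

void read_pixels_via_pbo(int width, int height, unsigned char *dst)
{
    const size_t size = (size_t)width * height * 4;   /* RGBA8 */
    GLuint pbo;

    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, size, NULL, GL_STREAM_READ);

    /* With a PBO bound, the last argument is an offset into the buffer,
     * not a host pointer, and the call does not block. */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);

    /* ... do unrelated CPU work here while the transfer runs ... */

    void *ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (ptr) {
        memcpy(dst, ptr, size);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    glDeleteBuffers(1, &pbo);
}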



      • #4
        Originally posted by Qaridarium View Post
        This means that if you have a really fast CPU, the CPU will be faster without the GPU!

        Only because "Most of the 483 milliseconds was spent transferring data to/from the GPU memory".
        The idea is to copy the data once, let a whole lot of filters do their thing, and only then copy back. The copying back and forth to the GPU will always be the bottleneck. This benchmark shows the worst-case scenario.
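
A minimal sketch of that "upload once, run the whole filter chain, download once" pattern with the OpenCL C API; the filter kernels are assumed to already be built and to take src/dst buffers as their first two arguments, and error checking is omitted.

/* Upload the pixels once, run every filter on device memory (ping-ponging
 * between two buffers), and copy back only the final result. */
#include <CL/cl.h>
#include <stddef.h>

void run_filter_chain(cl_context ctx, cl_command_queue queue,
                      cl_kernel *filters, size_t n_filters,
                      float *pixels, size_t n_floats)
{
    size_t bytes = n_floats * sizeof(float);

    /* One host-to-device copy at the start... */
    cl_mem src = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                bytes, pixels, NULL);
    cl_mem dst = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, NULL);

    /* ...then every filter reads and writes GPU memory only. */
    for (size_t i = 0; i < n_filters; i++) {
        clSetKernelArg(filters[i], 0, sizeof(cl_mem), &src);
        clSetKernelArg(filters[i], 1, sizeof(cl_mem), &dst);
        clEnqueueNDRangeKernel(queue, filters[i], 1, NULL,
                               &n_floats, NULL, 0, NULL, NULL);
        cl_mem tmp = src; src = dst; dst = tmp;   /* output becomes next input */
    }

    /* ...and one device-to-host copy at the very end (src holds the result
     * after the last swap). */
    clEnqueueReadBuffer(queue, src, CL_TRUE, 0, bytes, pixels, 0, NULL, NULL);

    clReleaseMemObject(src);
    clReleaseMemObject(dst);
}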



        • #5
          Blah blah blah... but what about 16bit/channel?
          ## VGA ##
          AMD: X1950XTX, HD3870, HD5870
          Intel: GMA45, HD3000 (Core i5 2500K)



          • #6
            Originally posted by darkbasic View Post
            Blah blah blah... but what about 16bit/channel?
            High bit depth will be in 3.0. It's been the plan for quite a while already.



            • #7
              This will be highly useful for all filters/plugins.

              It should also, in theory, be possible to cut the transfer time in half by keeping the working copy of the graphics on the card at all times and just sending updates to the merged graphics.
              Even a simple brush stroke is typically applied to just one layer, and the visible image requires a fair amount of computation on top of that, which can then be copied back.
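
A hedged sketch of that idea using the OpenCL image API: the layer stays resident on the GPU and only the dirty rectangle touched by a brush stroke is re-uploaded. The function name, the RGBA8 layout, and the dirty-rect coordinates are assumptions for illustration; error checking is omitted.

/* Re-upload only the changed sub-rectangle of a layer that lives on the GPU
 * as an OpenCL 2D image; the rest of the layer is never re-transferred. */
#include <CL/cl.h>

void upload_dirty_rect(cl_command_queue queue, cl_mem layer_image,
                       const unsigned char *pixels, size_t row_pitch,
                       size_t x, size_t y, size_t w, size_t h)
{
    size_t origin[3] = { x, y, 0 };
    size_t region[3] = { w, h, 1 };

    /* Non-blocking write of just the dirty rectangle. The host pointer is
     * offset to the rectangle's first pixel; row_pitch is the pitch of the
     * full host-side image so rows are strided correctly (4 bytes/pixel). */
    clEnqueueWriteImage(queue, layer_image, CL_FALSE, origin, region,
                        row_pitch, 0,
                        pixels + y * row_pitch + x * 4,
                        0, NULL, NULL);
}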



              • #8
                Originally posted by darkbasic View Post
                Blah blah blah... but what about 16bit/channel?
                GEGL has had 16-bit or greater per channel since the beginning. Are you perhaps talking about GIMP?
                In that case the answer is: when all the internals are replaced with GEGL.



                • #9
                  Originally posted by Qaridarium View Post
                  This means that if you have a really fast CPU, the CPU will be faster without the GPU!

                  Only because "Most of the 483 milliseconds was spent transferring data to/from the GPU memory".
                  Only until Fusion is common. Then moving data from CPU to GPU is a zero-copy operation. Perhaps it already is on Sandy Bridge (/Ivy Bridge for OpenCL) too?
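
For what it's worth, the usual way to give a shared-memory GPU a chance at zero-copy in OpenCL is to allocate with CL_MEM_ALLOC_HOST_PTR and use map/unmap instead of explicit reads and writes; whether it is truly zero-copy depends on the driver, and on a discrete card it may still copy behind the scenes. A rough sketch, with error checking omitted:

/* Allocate a buffer the driver can place in host-visible memory and fill it
 * through map/unmap; on a shared-memory APU the mapped pointer can be the
 * same physical memory the GPU reads, so filling it is the whole "transfer". */
#include <CL/cl.h>

cl_mem make_mapped_buffer(cl_context ctx, cl_command_queue queue,
                          size_t bytes, void **host_view)
{
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                bytes, NULL, NULL);

    /* Map for writing; the caller fills *host_view with pixel data. */
    *host_view = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                                    0, bytes, 0, NULL, NULL, NULL);

    /* ... fill *host_view here, then unmap before any kernels run ... */
    clEnqueueUnmapMemObject(queue, buf, *host_view, 0, NULL, NULL);
    return buf;
}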



                  • #10
                    Originally posted by curaga View Post
                    Only until Fusion is common. Then moving data from CPU to GPU is a zero-copy operation. Perhaps it already is on Sandy Bridge (/Ivy Bridge for OpenCL) too?
                    Since they share both memory and the L3 cache I'd assume so... Actually, I'm not sure whether there would be a copy from the dedicated SB memory or not (though I know they share the same physical memory, unlike AMD's older integrated graphics, which had optional sideport memory).



                    • #11
                      from the article:

                      Of course, the big problem is that the open-source Linux graphics drivers don't yet support OpenCL. There's work in this direction over Gallium3D via another Google Summer of Code project, but it's not yet ready for end-users nor will it likely be anytime soon.

                      Denis, who works on Clover, wrote yesterday on his blog that things are almost done. However, I have no idea when it will be ready to merge.



                      • #12
                        Originally posted by kayosiii View Post
                        Are you perhaps talking about GIMP?
                        In that case the answer is: when all the internals are replaced with GEGL.
                        3.0 should be a massive improvement over the current situation if they manage to pull it all off and launch it sometime in the next 10 years. Hopefully long before the decade is over :P Two features I miss in GIMP are 16 bit/channel and Free Transform. Apart from that it's an excellent program, even with the multi-window mode. The single-window mode somehow feels more awkward than the classic GIMP mode. Oh, and what the hell are they thinking with that Export instead of Save for every single image format except .xcf???

                        @cl333r Are you the one working on that OpenCL backend for GEGL? If you are, nice work; GIMP really needs a performance boost like that.



                        • #13
                          Originally posted by darkbasic View Post
                          Blah blah blah... but what about 16bit/channel?
                          Just think, it's only been 11 years since the functionality was handed to them on a silver platter and they rejected it unanimously in favor of vaporware.

                          It has got to be one of the most boneheaded moves in software development.



                          • #14
                            Originally posted by yogi_berra View Post
                            Just think, it's only been 11 years since the functionality was handed to them on a silver platter and they rejected it unanimously in favor of vaporware.
                            I know, I still use CinePaint.
                            ## VGA ##
                            AMD: X1950XTX, HD3870, HD5870
                            Intel: GMA45, HD3000 (Core i5 2500K)



                            • #15
                              Originally posted by yogi_berra View Post
                              It has got to be one of the most boneheaded moves in software development.
                              No, that seat is taken by Windows ME...

