GIMP Punts Painting Off To Separate Thread


  • #21
    Originally posted by schmidtbag View Post
    Not everything can be efficiently multi-threaded, nor is it always necessary. I'm not saying this is what you're hoping to see, but it is something I think is worth pointing out. CPUs are very good at multi-tasking, but unlike GPUs, they're relatively inefficient at parallelization.
    That's certainly true, but the decision tree is more complex than that. First of all, we need to decide if the task is already fast enough. Many of the 'can't be parallelized' tasks are already fast enough:

    $ time ls
    real 0m0,001s
    user 0m0,000s
    sys 0m0,001s

    So we can just forget all the trivial tasks that are already fast enough.

    Another category of tasks can be found in interactive applications. GUI tasks are real-time, asynchronous, and event based. In GUIs it is important to minimize the time between a spontaneous event and the reaction, and sometimes achieving that fast reaction time costs throughput. This can be done even if the machine has only one processor core. It is still a problem in some applications. For example, Claws Mail can't even shut down while it's fetching mail. Isn't that great.

    So for example, when you compile a program, each core is building their own C file simultaneously; they don't depend on each other. Meanwhile, when you are doing raytracing, it is important that each thread knows what the other thread is doing in order to remain accurate.
    You can compile independent modules independently (to some extent) without knowing about the other modules; the critical part, where knowledge has to be propagated, only comes after many steps. The first phases of a compilation are quite heavy: traversing the file system tree and computing dependencies is almost 100% I/O bound and slow, especially on spinning disks or network file systems, and parsing is fairly I/O bound and 100% independent. Admittedly C is a bad choice of language here, since it doesn't have proper modules, so you can't even parse fully independently. As for ray tracing, it depends on your engine, but basic ray tracing is trivially parallelizable, just like fractals: you can compute each photon independently.
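    A minimal sketch of that "each photon/pixel is independent" point, using a Mandelbrot fractal as a stand-in for a real ray tracer (the row splitting, worker count, and image size here are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def escape_count(cx, cy, max_iter=50):
    # Mandelbrot escape time for one point; it depends on nothing else.
    zx = zy = 0.0
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i
    return max_iter

def render_row(y, width=40, height=20):
    # One scanline; rows never look at each other's results.
    return [escape_count(3.0 * x / width - 2.0, 2.0 * y / height - 1.0)
            for x in range(width)]

# Threads used here only for portability of the sketch; in CPython,
# processes (or a GPU) would give real speedup for this CPU-bound loop.
with ThreadPoolExecutor(max_workers=4) as pool:
    image = list(pool.map(render_row, range(20)))
```

    Because no row reads another row's result, the work can be split across threads, processes, or GPU cores with no coordination beyond the final gather.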

    Each thread in a CPU can handle any task scheduled to it at any time, whereas, to my understanding, a GPU must process a "single" parallelized task at a time across each of its cores per clock. This means that if you are running a parallelized task, it is very difficult (and sometimes impossible, in the case of Hyper-Threading) to synchronize each CPU thread, which results in wasted clock cycles.
    GPUs have a computation model with local and non-local computation. In CUDA there are "blocks", which provide data parallelism inside, while "grids" are closer to task-parallel computation. You can also spawn new tasks from inside existing tasks in CUDA and run multiple tasks concurrently; the runtime system manages all that. There are some limits, but the idea is that you can "stream" independent sub-computations from a large set. CUDA benefits a lot when the data access patterns are SIMD-like: it can automatically use the wider memory bus and multiple cores, unlike CPUs, which require explicit vectorization such as AVX.
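    The grid/block indexing model can be mimicked in plain Python to show what the decomposition looks like. This is a toy serial emulation, not real CUDA; the `launch` helper and the SAXPY kernel are illustrative only:

```python
def launch(kernel, grid_dim, block_dim, *args):
    # Toy serial emulation of a CUDA-style launch: every (block, thread)
    # pair is an independent kernel invocation; a real GPU runs them
    # concurrently across its cores.
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(block_idx, block_dim, thread_idx, *args)

def saxpy(block_idx, block_dim, thread_idx, a, x, y, out):
    i = block_idx * block_dim + thread_idx  # CUDA's usual global index
    if i < len(x):                          # bounds guard, as in real kernels
        out[i] = a * x[i] + y[i]

n = 10
x = [float(i) for i in range(n)]
y = [1.0] * n
out = [0.0] * n
launch(saxpy, 3, 4, 2.0, x, y, out)  # 3 blocks of 4 threads cover 10 elements
```

    The guard matters because the grid is sized up to a multiple of the block size, so the last block has invocations with nothing to do.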

    The main difference is that GPUs are still more optimized for data-parallel work, whereas CPUs are good at running heavyweight tasks. CPUs work just fine when you have 2-32 threads and cores, but synchronization becomes more expensive as more threads need to participate. CPUs can emulate GPU workloads, but of course they don't perform as well, since GPU workloads don't need huge caches, branch prediction, or the other single-threaded optimizations that are useless there.
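    A toy illustration of that synchronization cost: every thread below funnels through a single lock, which is exactly the kind of serialization point that gets more expensive as the thread count grows (the thread and increment counts are arbitrary):

```python
import threading

def contended_sum(n_threads, per_thread):
    # Every thread funnels through one lock -- the serialization point
    # whose cost grows as more threads pile onto it.
    total = 0
    lock = threading.Lock()

    def worker():
        nonlocal total
        for _ in range(per_thread):
            with lock:
                total += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

    The answer is correct at any thread count; only the cost of agreeing on `total` changes, which is why adding threads past the contention point stops helping.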

    This is a great example of how to properly take advantage of the threads in a CPU. Some people may say "why can't lame, oggenc, or flac use all the available cores?" but I don't think they should, because in some cases it could actually slow down the output while adding a lot of unnecessary complexity.
    I guess one could argue that the traditional Unix tools work just fine: they do one job well, and you're not even supposed to expect multi-threading. Unfortunately there's no high-level framework for using the tools properly. Most tools like file managers suck at managing large tasks. We would need some tool like HandBrake for all the mundane tasks.



    • #22
      Originally posted by caligula View Post
      That's certainly true, but the decision tree is more complex than that. First of all, we need to decide if the task is already fast enough. Many of the 'can't be parallelized' tasks are already fast enough:

      So we can just forget all the trivial tasks that are already fast enough.
      I completely agree - this is one of the reasons I feel some people push parallelization a little too hard.
      You can compile independent modules independently (to some extent) without knowing about the other modules.
      ...
      Parsing is pretty I/O bound and 100% independent.
      I agree, that was kind of my point.
      Depends on your ray-tracing engine but basic ray-tracing is trivially parallelizable just like fractals. You can compute each photon independently.
      This is true - just about every calculation can be serialized. But my point was that raytracing is an example of a task that benefits from parallel processing, which not all tasks do.
      The main difference is that GPUs are still more optimized for data parallel whereas CPUs are good at running heavy weight tasks. CPUs work just fine when you have 2-32 threads and cores, but the synchronization becomes more expensive as more threads need to participate. CPUs can emulate the GPU workloads but of course they don't perform as well since GPU workloads don't need huge caches or branch prediction and other useless single threaded optimizations..
      Yup, and I don't really think there's anything wrong with that. I don't like how people are getting so fixated on multi-threading applications, because in a lot of cases, it isn't going to fix anything. There's nothing inherently wrong with a single-threaded application, just as long as being single-threaded is the most efficient approach (which sometimes it is, like the ls command). In the case of GIMP, I could definitely see how adding an additional thread specific to painting would improve usability.
      The way I see it, 16 threads is pretty much the most the average home user should ever need, for CPU tasks. If there is ever a need for more threads than that, the workload should be done on the GPU.
      Most tools like file managers suck at managing large tasks. We would need some tool like 'handbrake' for all the mundane tasks.
      Well, most of the time, the problem is waiting on I/O rather than the CPU. But, a lot of these applications will basically lock up while the I/O is held back, which can be a bit irritating at times when you have tabs open from other non-bottlenecked sources.
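      A sketch of the usual fix for that lock-up, assuming a queue-based event loop; `slow_fetch` and the polling loop are hypothetical stand-ins for the blocking I/O and the UI:

```python
import queue
import threading
import time

events = queue.Queue()

def slow_fetch(source):
    # Stand-in for blocking I/O (network/disk); runs off the main thread.
    time.sleep(0.2)
    events.put(("done", source))

threading.Thread(target=slow_fetch, args=("tab-1",), daemon=True).start()

# The "UI" loop keeps servicing other work instead of freezing on the fetch.
handled = []
for _ in range(100):                      # bounded poll loop for the sketch
    try:
        handled.append(events.get(timeout=0.05))
        break
    except queue.Empty:
        handled.append(("tick", None))    # UI stays responsive between polls
```

      The point is that the slow source only delivers an event when it finishes; the front end never waits on it directly, so the other tabs keep updating.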



      • #23
        Originally posted by dos1 View Post
        ...and massively downgrade their open/save dialogs? Thanks, but no thanks :P
        They can always switch to Qt. GTK2 is dead, and hopefully distros will start purging it from repos for good at some point.



        • #24
          Originally posted by shmerl View Post
          They can always switch to Qt. GTK2 is dead, and hopefully distros will start purging it from repos for good at some point.
          GTK2 is dying, but it isn't going to be dead for a long while. Unfortunately, there are still too many common programs that use it. XFCE is also still dependent on it, and Firefox only just recently managed to ditch it. I'm sure GTK4 will be released before GTK2 can be dropped from repos.



          • #25
            Originally posted by schmidtbag View Post
            GTK2 is dying, but it isn't going to be dead for a long while. Unfortunately, there are still too many common programs that use it. XFCE is also still dependent on it, and Firefox only just recently managed to ditch it. I'm sure GTK4 will be released before GTK2 can be dropped from repos.
            IIUC, GTK 4+ is going to be released more regularly, similar to how Firefox and other projects that do rolling revisions handle it.



            • #26
              Originally posted by cen1 View Post
              So... why is it so hard to thread this stuff if it only took a few hundred lines of code? Don't even get me started on the GNOME desktop, which apparently runs even the extensions in the same thread! No wonder my desktop drops to 5 FPS when I press the super key; it can't even handle the animation.

              I am switching to KDE in the immediate future; this is a complete joke.
              Gnome is worse than punching a mother. But back to GIMP: can you name a better open-source GUI application for *image manipulation*? Not for drawing, where Krita is clearly superior.
              I'm just glad this was added before the final release of GIMP 2.10. Have you tried GIMP 2.10 recently? It's packing a lot of features if you come from the current stable 2.8. I'm even already using GIMP 2.10 at work for some photos; it's very stable for an RC.



              • #27
                Originally posted by cen1 View Post

                We are in <current_year>
                I never get tired of that argument

                </sarcasm>



                • #28
                  We are in <current_year>.

                  Reminds me of "chaos theory".



                  "GIMP sucks because no humans live on the moon. Furthermore I have blue socks and can drink 2 gallons of Mountain Dew per hour. Everyone knows serious image editors use HaikuOS -- Checkmate Atheists."



                  • #29
                    Originally posted by discordian View Post
                    ***very very slow clapping***
                    Of course these issues only come up once there's a fix; before that, you would be crucified for complaining about sluggish behavior.

                    I guess in 20 years, Gnome will finally admit that things are crappy without a scenegraph and server-side rendering.
                    All those features are great, and AFAIK GIMP finally started developing features on branches, which should help development and avoid mistakes made in the past. But hey, if you want to start a brand-new piece of software with a whole better internal architecture, then I'm all in.



                    • #30
                      GTKMM4 needs to add the ability to autoconnect a signal to a class method. If they do that, I'll never bother learning Qt.
