GIMP Punts Painting Off To Separate Thread


  • #31
    Originally posted by schmidtbag View Post
    Yup, and I don't really think there's anything wrong with that. I don't like how people are getting so fixated on multi-threading applications, because in a lot of cases, it isn't going to fix anything. There's nothing inherently wrong with a single-threaded application, just as long as being single-threaded is the most efficient approach (which sometimes it is, like the ls command).
    Actually, quite a few applications are performance limited. Consider basic package management. A typical system upgrade downloads and installs 50-300 packages. The only dependencies are: a package needs to be downloaded before it can be installed, and a set of packages replacing a set of previous packages should be installed in an atomic transaction. This forms a directed graph with lots of branches. However, a typical package manager does not even download packages in the background while installing the ones it has already fetched. This wastes tons of time. It's still better than any Windows attempt I've seen, but you could save minutes by doing that, which is a lot.
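    To make the overlap concrete, here is a minimal Python sketch of that pipelining idea (not how any real package manager works; download() and install() are hypothetical placeholders). It only respects the "download before install" dependency and ignores the atomic-transaction constraint discussed later in the thread.

```python
# Toy sketch: overlap downloading with installing already-fetched packages.
# download() and install() are hypothetical stand-ins for the real steps.
import queue
import threading
import time

def download(pkg):
    time.sleep(0.2)              # pretend network I/O
    return f"{pkg}.pkg"

def install(artifact):
    time.sleep(0.1)              # pretend disk/CPU work; kept serialized

def upgrade(packages):
    fetched = queue.Queue()

    def downloader():
        for pkg in packages:     # "download before install" is the only ordering
            fetched.put(download(pkg))
        fetched.put(None)        # sentinel: all downloads finished

    threading.Thread(target=downloader, daemon=True).start()

    # Install artifacts as they arrive, while later downloads keep running.
    while (artifact := fetched.get()) is not None:
        install(artifact)

upgrade([f"pkg{i}" for i in range(10)])
```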

    In the case of GIMP, I could definitely see how adding an additional thread specific to painting would improve usability.
    GIMP just recently switched to a fully GPU-based image transform engine. It's a pretty obvious move, as most filters are 100% data parallel and scale well to 10,000 CUDA cores.

    The way I see it, 16 threads is pretty much the most the average home user should ever need, for CPU tasks. If there is ever a need for more threads than that, the workload should be done on the GPU.
    The compilation of larger projects is a great example of a task that can use as many cores as you can offer. The Phoronix Test Suite comes with a kernel compilation benchmark; so far, up to 32 cores have been useful. I can easily see how this scales to compiling KDE, GNOME, Firefox, LibreOffice, and so on. The performance difference is just huge when using Gentoo.

    Well, most of the time, the problem is waiting on I/O rather than the CPU. But, a lot of these applications will basically lock up while the I/O is held back, which can be a bit irritating at times when you have tabs open from other non-bottlenecked sources.
    Again, this is something threads can help with. Linux has policy frameworks for disk "QoS", and you can rearrange the app design so that less stuff depends on I/O. For instance, in GIMP most tasks are CPU/GPU bound, not I/O bound: after an image has been loaded, you don't touch the disk unless the workstation runs out of RAM. Video/audio transcoding is hugely CPU/GPU bound. You'd need a seriously powerful system to saturate even a single SATA disk. Crypto is pretty balanced. On my systems, AES can write/read about as fast as the main disk. When using slower disks, I can appreciate the fact that threading can provide me with a fresh core for doing actual paid work.
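    As a generic illustration of that point (this assumes nothing about GIMP's internals), the usual fix for an app that locks up on I/O is to push the blocking work onto a worker thread and keep the interactive loop running:

```python
# Generic pattern: run blocking I/O on a worker thread so the main
# (interactive) loop never freezes. load_image() is a hypothetical stand-in.
from concurrent.futures import ThreadPoolExecutor
import time

def load_image(path):
    time.sleep(2)                 # stand-in for a slow disk or network read
    return f"pixels of {path}"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(load_image, "/tmp/big.xcf")

    while not future.done():      # the "UI loop" stays responsive meanwhile
        print("still responsive...")
        time.sleep(0.5)

    print(future.result())
```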

    Some examples of apps that use no or very few threads but should use more: TeX, LibreOffice, Inkscape, File Roller and other archiving tools, ...

    Comment


    • #32
      Originally posted by danieru View Post

      GNOME is worse than punching a mother. But still, on GIMP: can you tell me about better open source software with a GUI for *image manipulation*, not for drawing? For drawing, Krita is clearly superior.
      I'm just glad this was added before the final release of GIMP 2.10. Have you tried GIMP 2.10 recently? It's packing a lot of features if you come from the current stable 2.8. I'm even already using GIMP 2.10 at work for some photos; very stable for an RC.
      Still on GIMP 2.8... MyPaint is really good for drawing.

      Comment


      • #33
        You'd think this would have been done 10 years ago. In 2018 they should be thinking about offloading painting to the GPU, not another thread on the CPU.

        Comment


        • #34
          Originally posted by caligula View Post
          However, a typical package manager does not even download packages in the background while installing the ones it has already fetched. This wastes tons of time. It's still better than any Windows attempt I've seen, but you could save minutes by doing that, which is a lot.
          This is a good idea, and something that had occurred to me too. I guess the problem with this is having the package manager know which packages it can install ahead of time without causing any issues in the event the update is either interrupted or broken. So yes, this idea is more efficient, but it decreases reliability.
          The compilation of larger projects is a great example of a task that can use as many cores as you can offer. The Phoronix Test Suite comes with a kernel compilation benchmark; so far, up to 32 cores have been useful. I can easily see how this scales to compiling KDE, GNOME, Firefox, LibreOffice, and so on. The performance difference is just huge when using Gentoo.
          I don't disagree, but like I said, I was referring to the average person. The average person isn't going to regularly compile something as large as the things you mentioned; not even the average Linux desktop user. To reiterate, I can see GPUs getting increasingly powerful for home users, but I don't really see how CPUs will ever need more than 16 threads in the foreseeable future, unless there's some major technological breakthrough.
          You'd need a seriously powerful system to saturate even a single SATA disk. Crypto is pretty balanced. On my systems, AES can write/read about as fast as the main disk. When using slower disks, I can appreciate the fact that threading can provide me with a fresh core for doing actual paid work.
          I was referring more specifically to file browsers and, though it wasn't really implied, disk-heavy commands (like grep). Otherwise yes, I agree with everything you said.
          Some examples of apps that use no or very few threads but should use more: TeX, LibreOffice, Inkscape, File Roller and other archiving tools, ...
          LibreOffice is a program that would benefit from maybe 2 or 3 pre-set threads (so, for example, one for logic and one for rendering), but it wouldn't benefit much beyond that. LibreOffice Calc, meanwhile, can take advantage of all cores, or use OpenCL, to do multi-cell calculations. I'm not sure if Base has anything like this (I never use Base).
          I thought File Roller was limited by whatever filetype you use? So for example, if you use tar.bz2, you should be able to use all available cores.
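          For what it's worth, stock bzip2 compresses a single stream on one core (parallel variants like pbzip2 split the work), so the file format alone doesn't guarantee it. Purely as an illustration, here is a rough sketch of how an archiver could spread compression across all cores by handling members in separate processes; this is not what File Roller actually does:

```python
# Sketch: an archiver could compress archive members on all cores by farming
# each member out to a separate process (not what File Roller actually does).
import bz2
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def compress_member(path: Path) -> bytes:
    return bz2.compress(path.read_bytes())

if __name__ == "__main__":
    members = sorted(Path(".").glob("*.txt"))
    with ProcessPoolExecutor() as pool:        # one worker per CPU core by default
        blobs = list(pool.map(compress_member, members))
    print(f"compressed {len(blobs)} members")
```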

          Comment


          • #35
            Originally posted by caligula View Post
            Consider basic package management. A typical system upgrade downloads and installs 50-300 packages. The only dependencies are: a package needs to be downloaded before it can be installed, and a set of packages replacing a set of previous packages should be installed in an atomic transaction. This forms a directed graph with lots of branches. However, a typical package manager does not even download packages in the background while installing the ones it has already fetched. This wastes tons of time. It's still better than any Windows attempt I've seen, but you could save minutes by doing that, which is a lot.
            This is why I'm thankful for Gentoo and Portage.

            Comment


            • #36
              Originally posted by caligula View Post
              GIMP just recently switched to a fully GPU-based image transform engine.
              As a team member, I'm quite curious where you could possibly have read such a thing?

              Comment


              • #37
                Originally posted by discordian View Post
                ***very very slow clapping***
                Of course these issues only come up after there is a fix; before that, you will be crucified for complaining about sluggish behavior.

                I guess in 20 years, Gnome will finally admit that things are crappy without a scenegraph and server-side rendering.
                1) GNOME and GIMP are separate projects.
                2) What sluggish behaviour? I'm honestly curious. Can you link to some of your work and discuss what parts were "sluggish"?

                Comment


                • #38
                  Originally posted by caligula View Post
                  Actually, quite a few applications are performance limited. Consider basic package management. A typical system upgrade downloads and installs 50-300 packages. The only dependencies are: a package needs to be downloaded before it can be installed, and a set of packages replacing a set of previous packages should be installed in an atomic transaction. This forms a directed graph with lots of branches. However, a typical package manager does not even download packages in the background while installing the ones it has already fetched. This wastes tons of time. It's still better than any Windows attempt I've seen, but you could save minutes by doing that, which is a lot.

                  GIMP just recently switched to a fully GPU-based image transform engine. It's a pretty obvious move, as most filters are 100% data parallel and scale well to 10,000 CUDA cores.

                  The compilation of larger projects is a great example of a task that can use as many cores as you can offer. The Phoronix Test Suite comes with a kernel compilation benchmark; so far, up to 32 cores have been useful. I can easily see how this scales to compiling KDE, GNOME, Firefox, LibreOffice, and so on. The performance difference is just huge when using Gentoo.

                  Again, this is something threads can help with. Linux has policy frameworks for disk "QoS", and you can rearrange the app design so that less stuff depends on I/O. For instance, in GIMP most tasks are CPU/GPU bound, not I/O bound: after an image has been loaded, you don't touch the disk unless the workstation runs out of RAM. Video/audio transcoding is hugely CPU/GPU bound. You'd need a seriously powerful system to saturate even a single SATA disk. Crypto is pretty balanced. On my systems, AES can write/read about as fast as the main disk. When using slower disks, I can appreciate the fact that threading can provide me with a fresh core for doing actual paid work.

                  Some examples of apps that use no or very few threads but should use more: TeX, LibreOffice, Inkscape, File Roller and other archiving tools, ...
                  The package manager trick doesn't really work for sanely designed package managers. Typically the design goals go like this, though no one manages to fully achieve them yet:
                  1) A single update should be considered atomic: either it fully works or it is fully rolled back.
                  2) Multiple updates are not allowed at the same time, to ensure update database consistency and to make rollbacks of large transactions simpler.

                  This is exactly why nothing is ever installed before everything is downloaded. What can be done to make this more efficient is background downloading, e.g. BITS in Windows, but this is problematic as it may saturate system resources while things are being downloaded. Downloading in cost-efficient package management systems means diffs instead of full downloads, which results in considerable CPU and disk usage (random access) during the download.
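                  A minimal sketch of that policy, with hypothetical download/install_one/remove_one placeholders: every download completes before anything installs, and a failure rolls the whole transaction back.

```python
# Sketch of "download everything first, then install as one atomic
# transaction"; the three helpers are hypothetical placeholders.
def download(pkg):
    return f"{pkg}.rpm"

def install_one(artifact):
    print(f"installing {artifact}")

def remove_one(artifact):
    print(f"rolling back {artifact}")

def transactional_upgrade(packages):
    artifacts = [download(p) for p in packages]   # nothing installs until all are fetched
    installed = []
    try:
        for a in artifacts:
            install_one(a)
            installed.append(a)
    except Exception:
        for a in reversed(installed):             # undo the partial transaction
            remove_one(a)
        raise

transactional_upgrade(["glibc", "bash", "coreutils"])
```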
                  Last edited by nanonyme; 13 April 2018, 03:17 AM.

                  Comment


                  • #39
                    Originally posted by prokoudine View Post

                    As a team member, I'm quite curious where you could possibly have read such a thing?
                    Maybe the claim was a bit too strong. Potentially GPU accelerated: https://wiki.gimp.org/wiki/Hacking:P...ilters_to_GEGL

                    Comment


                    • #40
                      Originally posted by nanonyme View Post

                      The package manager trick doesn't really work for sanely designed package managers. Typically the design goals go like this, though no one manages to fully achieve them yet:
                      1) A single update should be considered atomic: either it fully works or it is fully rolled back.
                      2) Multiple updates are not allowed at the same time, to ensure update database consistency and to make rollbacks of large transactions simpler.
                      The atomic nature of package management covers only the installation/removal of a single package. It makes no claims about the downloading of packages. Besides, you can end up with a broken system with almost any of the current package managers if some package fails to install. They won't simulate the whole installation before actually doing it. Been there, done that with Ubuntu, Debian, Fedora, Gentoo, Arch, ...

                      - This is exactly why nothing is ever installed before everything is downloaded.

                      - What can be done to make this more efficient is background downloading, e.g. BITS in Windows, but this is problematic as it may saturate system resources while things are being downloaded.
                      Wrong, and wrong. What kind of connection do you think an average Joe has? 10 Gbps?

                      Downloading in cost-efficient package management systems means diffs instead of full downloads, which results in considerable CPU and disk usage (random access) during the download.
                      Why would that be the case? The repository can precalculate and/or cache diffs between common versions and then reuse this information for multiple clients. The client only needs to say which version is installed and what the target version is.
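                      A rough sketch of that server-side idea (make_delta() is a hypothetical stand-in for a real binary differ such as bsdiff): each (package, installed, target) combination is computed once and the cached delta is reused for every client that reports the same pair of versions.

```python
# Sketch: the repository caches one delta per (package, installed, target)
# triple and serves it to every client; make_delta() is a hypothetical differ.
from functools import lru_cache

RELEASES = {
    ("firefox", "60.0"): b"firefox 60.0 payload",
    ("firefox", "61.0"): b"firefox 61.0 payload",
}

def make_delta(old: bytes, new: bytes) -> bytes:
    return b"delta(" + old[:12] + b" -> " + new[:12] + b")"   # placeholder

@lru_cache(maxsize=None)          # computed once, reused for all clients
def delta_for(pkg: str, installed: str, target: str) -> bytes:
    return make_delta(RELEASES[(pkg, installed)], RELEASES[(pkg, target)])

# The client only reports its installed version and the desired target.
print(delta_for("firefox", "60.0", "61.0"))
```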

                      Comment
