
Krita Looking More At GPU Acceleration & AI In 2024


  • #21
    Originally posted by sophisticles View Post
    This is open source development in a nutshell, they can't even finish fixing their current software stack and they are worried about implementing new, state of the art, features?
    KDE had to port to Qt 6 because The Qt Company is now only continuing to provide updates for Qt 5 to paying LTS customers. It was less work than taking responsibility for maintaining Qt 5 going forward.

    Same reason Xfce and MATE didn't fork GTK+ 2.x and continue to maintain it, and why nobody is stepping up to push back against the Wayland tide.



    • #22
      Originally posted by Weasel View Post
      Spotted the SJW? It might come as a HUGE shock to you, but hear me out: not every opinion is valid or should be allowed.

      It's not an opinion when it's factually wrong. You can use ControlNet to, for example, color a sketch or simply recolor a grayscale image. But a lot more of course. How is that "bland" again?

      You can't and shouldn't respect opinions that are factually wrong. They are literal misinformation. Period.
      Give this a watch. The TL;DR is that it's very important not to train generative A.I. on images generated with generative A.I.: there's a JPEG-esque generation-loss problem where you get massively less diversity after even one cycle, and the more cycles you run, the harder it is to force the model out of the poses, styles, tropes, etc. that it starts to fixate on.

      (And yes, I've used Stable Diffusion ControlNet models. You'd be surprised how defiant SD can get about giving you what you want if it's not already lurking in the training data strongly enough.)
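      That generation-loss effect can be illustrated with a toy simulation (a deliberately crude sketch, not a claim about any particular model): repeatedly fit a Gaussian to a small sample and resample from the fit. Each "train on generated output" cycle loses a little diversity, and the spread collapses over generations.

```python
import math
import random

def spread(samples):
    """MLE standard deviation of the samples -- our stand-in for 'diversity'."""
    mu = sum(samples) / len(samples)
    return math.sqrt(sum((x - mu) ** 2 for x in samples) / len(samples))

def fit_and_resample(samples, n):
    """Fit a Gaussian to the samples, then draw n fresh samples from the fit.
    This mimics one 'train a model on generated data' cycle in miniature."""
    mu = sum(samples) / len(samples)
    sigma = spread(samples)  # finite-sample fit: each cycle loses a bit of spread
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(10)]  # generation 0: "real" data
initial = spread(data)
for generation in range(200):  # keep retraining on the previous generation's output
    data = fit_and_resample(data, 10)
final = spread(data)
print(f"spread of gen 0: {initial:.3f}, spread of gen 200: {final:.3f}")
```

      The spread after 200 cycles is a small fraction of the original -- the same "fixation" dynamic, in one dimension.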



      • #23
        Once the cat is out of the bag, it's out of the bag.

        AI Art & Video is here to stay. Shooting ourselves in the knee-caps in the West will just guarantee it grows in the East, and they get the money and control. It's never going away.

        It's just like every other industry that has tried to resist change:

        The Music Industry vs Napster & Online Music -- The Cat Won.

        Manual Labor Age vs The Industrial Revolution & Automation -- The Cat Won.

        There are countless other scenarios of people trying to protect their employment and income but so far the score is at least -- Cat: 2 -- Bag: 0

        --

        It would be nice if Krita beefed up its multi-core usage -- I use it on a 16-core / 32-thread CPU with an RX 6900 XT, and sometimes larger files feel sluggish.

        Also, general improvements for large-format documents (10k+ pixels) would be appreciated.



        • #24
          Originally posted by ElectricPrism View Post
          Once the cat is out of the bag, it's out of the bag.

          AI Art & Video is here to stay. Shooting ourselves in the knee-caps in the West will just guarantee it grows in the East, and they get the money and control. It's never going away.

          It's just like every other industry that has tried to resist change:

          The Music Industry vs Napster & Online Music -- The Cat Won.

          Manual Labor Age vs The Industrial Revolution & Automation -- The Cat Won.

          There are countless other scenarios of people trying to protect their employment and income but so far the score is at least -- Cat: 2 -- Bag: 0
          And which of the 2 examples required stealing or free work to be profitable?
          There is no problem with LLMs and SDs as tech; it's the stealing to make them good that is the problem.
          And it seems they are not worth the money needed to pay for the works used in their training.
          Or at least OpenAI doesn't think so, and fights hard to avoid showing how much of other people's work it stole.



          • #25
            Originally posted by ssokolow View Post
            Give this a watch. The TL;DR is that it's very important not to train generative A.I. on images generated with generative A.I.: there's a JPEG-esque generation-loss problem where you get massively less diversity after even one cycle, and the more cycles you run, the harder it is to force the model out of the poses, styles, tropes, etc. that it starts to fixate on.

            (And yes, I've used Stable Diffusion ControlNet models. You'd be surprised how defiant SD can get about giving you what you want if it's not already lurking in the training data strongly enough.)
            Yeah, that's a valid concern. Thanks for the video. But it's only a problem because people are lazy with prompts (or, in this case, ControlNet). Or rather, I should say lazy with conditioning, since text is just one type of conditioning.

            BTW I found the pose ControlNets, despite their popularity, to be pretty terrible, but stuff like normal CN, depth CN or lineart/canny is as good as a normal artist would be (it helps to stop them after about 40% of the steps and lower their weight, too); you don't even have to use an image as input -- you can draw the lineart yourself, or render normal/depth maps with Blender, for instance.

            The fact is, people tend to tag their images more precisely than they describe what they want to generate from them. You have to be specific here.
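            To make the "stop after ~40% of steps, lower the weight" trick concrete, here's a minimal sketch (hypothetical helper names, not any real library's API) of classifier-free guidance plus a toy control-weight schedule:

```python
def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: push the denoiser's prediction toward the
    conditioned one, scaled by guidance_scale (1.0 = just the condition)."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def control_weight(step, total_steps, end_fraction=0.4, weight=0.7):
    """Toy schedule: apply the control signal (lineart/depth/etc.) at reduced
    weight, and only for the first `end_fraction` of the denoising steps."""
    return weight if (step / total_steps) < end_fraction else 0.0

# Control is active (at 0.7 weight) early on, then switched off at 40%:
print(control_weight(3, 10))   # within first 40% of steps
print(control_weight(5, 10))   # past 40%: control disabled
print(cfg_combine(0.0, 1.0, 7.5))
```

            Real pipelines expose the same idea as a conditioning-scale parameter plus start/end fractions for when the control signal applies; the names above are illustrative only.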



            • #26
              Originally posted by pixo View Post
              And which of the 2 examples required stealing or free work to be profitable?
              There is no problem with LLMs and SDs as tech; it's the stealing to make them good that is the problem.
              And it seems they are not worth the money needed to pay for the works used in their training.
              Or at least OpenAI doesn't think so, and fights hard to avoid showing how much of other people's work it stole.
              Oh you mean how a human could look at the publicly available book or whatever and take some "inspiration" from it eh? Humans were the first thieves so leave the AI alone.

              No, humans do not get special privileges. Unless humans are barred from even getting inspiration from it (such as the GPL and Microsoft's Copilot, which I agree should be illegal), then AI shouldn't be either.

              Freaking luddites.



              • #27
                Originally posted by Weasel View Post
                Spotted the SJW? It might come as a HUGE shock to you, but hear me out: not every opinion is valid or should be allowed.

                It's not an opinion when it's factually wrong. You can use ControlNet to, for example, color a sketch or simply recolor a grayscale image. But a lot more of course. How is that "bland" again?

                You can't and shouldn't respect opinions that are factually wrong. They are literal misinformation. Period.
                Who decides what is "factual"? Do not be too quick to answer... The person who decides "what is factual and what is not", in a society which condones silencing the "not factual" opinions, is the person who controls that society... AKA tyranny...



                • #28
                  Originally posted by Weasel View Post
                  No, humans do not get special privileges.
                  Are you actually an AI program? You forget that humans have human rights and have to follow laws that govern other humans and their rights. An AI is not in such a place. AI does not have any special rights (yet) and should be treated like a knife or any other C program. Does your argument that "some" humans steal others' code mean that every AI is allowed to steal too? And a human can't steal millions and billions of lines of code in an automated way; there is a huge difference.



                  • #29
                    Originally posted by TemplarGR View Post
                    Who decides what is "factual"? Do not be too quick to answer... The person who decides "what is factual and what is not", in a society which condones silencing the "not factual" opinions, is the person who controls that society... AKA tyranny...
                    The whole point of facts is that they can be measured and proven to be correct (at least beyond reasonable doubt).

                    For generative AI, you can condition on far more than just text. Text prompts are just one way to do it. You can also combine conditioning, of course -- for example, a text prompt + a sketch or lineart (that's the point of ControlNet). There's far more than this, obviously (pose control, normal maps, depth maps, various edge algorithms used as conditioning, and so on).

                    Now tell me, what is "bland" about this? Where's the human creativity in coloring an image? Those people shouldn't be allowed to voice their opinion because they are spreading misinformation and, as you can see, it sadly has REAL consequences when decisions get made based on WRONG INFORMATION from other clueless people.

                    The simplest case: you can use img2img and only slightly alter the image, without even using ControlNet. How is that "bland" again?



                    • #30
                      Originally posted by byteabit View Post
                      Are you actually an AI program? You forget that humans have human rights and have to follow laws that govern other humans and their rights. An AI is not in such a place. AI does not have any special rights (yet) and should be treated like a knife or any other C program. Does your argument that "some" humans steal others' code mean that every AI is allowed to steal too? And a human can't steal millions and billions of lines of code in an automated way; there is a huge difference.
                      That's irrelevant. AIs we talk about are just tools, they're not self aware and will never be because their memory is read-only (only writable during training, not inference).

                      Let me rephrase that so you get it: Human artists do not get special privileges. Yes, there's other types of humans, whether you like it or not. Such as, the humans who designed the bloody AI in the first place.

                      Why should human artists be allowed to scrape other artists' work and take inspiration from it, using tools such as the web browser (e.g. on an artist's site), while the human developers who created an AI to automatically scrape it (just a different tool than a web browser...) are not allowed to use their tool?

                      Get real. That's literally infringing on their human rights to use their own tools for stuff that you have access to. WTF?

                      Don't hate them just because they're smart. It's called industrialization and automation. Keep doing manual labor lol.

