Lisa Su Says The "Team Is On It" After Tweet About Open-Source AMD GPU Firmware

  • #31
    Originally posted by sophisticles View Post
    I think it's hysterical that this guy thinks that AMD open sourcing everything is the solution because he thinks he can fix what's wrong.

    I suspect the problem is hardware related, if AMD's engineers who designed the hardware can't get the software to work right, then no one outside the company will either.

    This guy is right that AMD needs to fix their stuff, but not in the way he thinks.

    AMD needs to fix their hardware.
    I agree with you on everything except the hardware issues.
    Open-sourcing everything is foolish; AMD itself must fix the bugs in its software.
    Libraries, drivers and frameworks need to be rock solid.



    • #32
      Originally posted by ssokolow View Post

      Does A.I. Art count? Stable Diffusion is one of the more leisurely things that motivated me to buy an nVidia card because I could trust it would Just Work™, while the AMD GPU in my Ryzen 5 only shows up in the Vulkan port of the Waifu2x upscaler that I also have installed.
      Yeah, I was referring to 'Productivity in general' - and although I usually mention GPU Compute, that includes AI too. That said, I believe AMD truly wants to invest more resources to improve the situation; whether they can or not remains to be seen.

      At the moment, AMD sucks at AI - although there are often articles by some companies/tech websites insisting that AI companies are using AMD cards for this. It's usually the same culprits - Tom's Hardware, e.g.

      But most of the sources out there indicate AMD is way behind in AI - not unlike their lacking performance in GPU Compute (e.g. Blender et al.):

      NVIDIA has released a TensorRT extension for Stable Diffusion using Automatic 1111, promising significant performance gains. But does it work as advertised?



      The only advantage AMD ever has is with SHARK:

      Little Demo of using SHARK to generate images with Stable Diffusion on an AMD Radeon 7900 XTX (MBA). Slightly overclocked as you can see in the settings. B...


      NVIDIA is absolutely dominating the AI conversation right now and for good measure - their GPUs perform out-of-the-box and are a top choice for professionals and businesses that want to dabble in consumer AI. But just this week, both Intel and AMD optimized their software stacks to get massive speedups in generative AI, which has seen AMD's RX 7900 XTX get higher performance per dollar than an NVIDIA RTX 4080 in generative AI (specifically Stable Diffusion with A1111/Xformers). Considering Stable Diffusion accounts for the vast majority of non-SaaS, localized generative AI right now - this is a major milestone and finally […]


      Most recent charts show Nvidia with a huge performance lead except for when using Shark.



      • #33
        Originally posted by sophisticles View Post
        AMD needs to fix their hardware.
        AMD needs to stop paying "influencers" to hype their trash that is unusable in reality.
        This was very clear at the RDNA3 release - in the number of bugs RDNA3 has on Linux, only Intel has more.
        ROCm is completely, 100% unusable.

        Yesterday I was watching a random streamer playing video games on Windows on an AMD GPU - the stream crashed every 20-30 minutes. This is just a joke, not a "GPU".



        • #34
          Originally posted by ssokolow View Post

          Does A.I. Art count? Stable Diffusion is one of the more leisurely things that motivated me to buy an nVidia card because I could trust it would Just Work™, while the AMD GPU in my Ryzen 5 only shows up in the Vulkan port of the Waifu2x upscaler that I also have installed.
          Stable Diffusion is running fine on my RX 6800. But I heard the most popular implementation of it is NVIDIA-optimized and runs poorly on AMD. It might also be easier to get up and running on NVIDIA, IDK. Anyway, with SD.Next I'm getting decent performance; I'm happy with this for a GPU I bought for Linux gaming, not caring about AI.

          In some benchmarks the 7900 XTX is almost as fast as a 4090 when comparing each with a Stable Diffusion implementation that performs well on that GPU. The RX 7000 series has some extra hardware that improves AI performance, while the RX 6000 series is significantly slower than the RTX 3000 series.
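          For reference, getting to that point is roughly just swapping in the ROCm build of PyTorch before first launch. A minimal sketch, assuming a ROCm-capable Radeon with a working ROCm install - the rocm5.6 wheel index and the --use-rocm launcher flag are assumptions to verify against SD.Next's current docs for your card:

          Code:
          # minimal sketch, assuming a ROCm-capable Radeon and a working ROCm install;
          # the rocm5.6 wheel index and the --use-rocm flag are assumptions - check
          # SD.Next's own wiki for your card before copying this
          git clone https://github.com/vladmandic/automatic sdnext
          cd sdnext
          python3 -m venv venv && source venv/bin/activate
          pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.6
          ./webui.sh --use-rocm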



          • #35
            Originally posted by danilw View Post
            AMD needs to stop paying "influencers" to hype their trash that is unusable in reality.
            This was very clear at the RDNA3 release - in the number of bugs RDNA3 has on Linux, only Intel has more.
            ROCm is completely, 100% unusable.

            Yesterday I was watching a random streamer playing video games on Windows on an AMD GPU - the stream crashed every 20-30 minutes. This is just a joke, not a "GPU".
            Phoronix AMD fanboys hype AMD for free - must be nice for AMD to have free PR?
            I want AMD to be good for graphics - not just on Linux - but the reviews are often mixed, usually because of FOSS zealots and people who aren't objective; when you read reviews from people who actually test the cards in the relevant software/programs, the results are often disappointing.
            AMD didn't care about GPU Compute - 3D etc. - but now they seem to care a bit more about AI, and that seems to be where they're positioning their attention - and maybe the only reason they're going to address the ROCm mess. Will that help with other areas (like Compute)? I dunno...



            • #36
              Originally posted by Ph42oN View Post
              Stable Diffusion is running fine on my RX 6800. But I heard the most popular implementation of it is NVIDIA-optimized and runs poorly on AMD. It might also be easier to get up and running on NVIDIA, IDK. Anyway, with SD.Next I'm getting decent performance; I'm happy with this for a GPU I bought for Linux gaming, not caring about AI.

              In some benchmarks the 7900 XTX is almost as fast as a 4090 when comparing each with a Stable Diffusion implementation that performs well on that GPU. The RX 7000 series has some extra hardware that improves AI performance, while the RX 6000 series is significantly slower than the RTX 3000 series.
              Are you using SHARK? It seems to be one of the few that offer good performance on AMD GPUs?
              Stable Diffusion is seeing more use for professional content creation work. How do NVIDIA GeForce and AMD Radeon cards compare in this workflow?



              Note, the "most commonly used implementation" is Automatic 1111 - and AMD gpus supposedly perform quite poorly - very unfortunate but if you check out the links - you'll discover that is the conclusion 'most in the know' take.
              Anyway, here's an interesting 'write-up' or 'how-to' for setting up AMD gpus in Linux for AI/SD:
              Learn how to install and use Stable Diffusion on Linux (Ubuntu) with an AMD GPU to generate high-quality images from text.

              I'd like to try this (if I get an amd gpu), although my main area (of interest/use) is video editing and Blender - this looks pretty interesting.
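              From what I've seen, the AMD path in those write-ups boils down to pointing Automatic1111's launcher at the ROCm build of PyTorch. A rough sketch - the rocm5.6 wheel index and the HSA_OVERRIDE_GFX_VERSION override (only some cards need it) are assumptions to double-check against the guide:

              Code:
              # rough sketch of the usual Automatic1111-on-Radeon setup; the ROCm wheel
              # index and the gfx override below are assumptions - follow the guide for
              # the exact values your card needs
              git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
              cd stable-diffusion-webui
              # tell the launcher to install the ROCm build of torch instead of the CUDA one
              export TORCH_COMMAND="pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.6"
              # some RDNA2 cards need to masquerade as a supported gfx target
              export HSA_OVERRIDE_GFX_VERSION=10.3.0
              ./webui.sh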



              • #37
                Originally posted by Ph42oN View Post

                Stable Diffusion is running fine on my RX 6800. But I heard the most popular implementation of it is NVIDIA-optimized and runs poorly on AMD. It might also be easier to get up and running on NVIDIA, IDK. Anyway, with SD.Next I'm getting decent performance; I'm happy with this for a GPU I bought for Linux gaming, not caring about AI.

                In some benchmarks the 7900 XTX is almost as fast as a 4090 when comparing each with a Stable Diffusion implementation that performs well on that GPU. The RX 7000 series has some extra hardware that improves AI performance, while the RX 6000 series is significantly slower than the RTX 3000 series.
                Getting completely and utterly fed up and frustrated after several days of trying to replace Kohya SS's build of TensorFlow with one that didn't require AVX so it'd work on my 2011 Athlon II was what finally prompted me to migrate to a brand new Ryzen 5... and I'm a programmer. ML-world Python dependency management is hell.

                With nVidia, every SD tool I've tried (Easy Diffusion, automatic1111, Kohya SS, etc.) is as simple as this:

                Code:
                mkdir -p ~/opt/whatever            # dedicated directory for the tool
                cd ~/opt/whatever
                wget https://whatever/setup.sh     # grab the tool's one-shot installer
                # run the installer inside a firejail sandbox: drop all capabilities,
                # no D-Bus, no sound, a minimal /dev, no root, and only $PWD whitelisted
                firejail --noprofile --whitelist=$PWD --private-tmp --caps.drop=all --nodbus --nonewprivs --noroot --nosound --private-dev bash ./setup.sh
                TL;DR (for the above): I haven't had time to run Gentoo since I switched to Lubuntu (and later Kubuntu) in 2012. Why would I have time to engage in the same sorts of activities with AMD libraries?

                ...plus, having just reinstalled my mother's laptop with AMD graphics (after RMAing a dead SSD) and discovered that it isn't just out-of-tree modules causing UBSAN failures with the Linux 6.5 HWE kernel from Kubuntu 22.04 LTS, I'm more vindicated than ever in choosing the GPU brand whose drivers are a single apt-get away from upgrading/downgrading independently of the rest of the kernel. Those drivers have only needed to be changed from the distro-picked version three times in the last 20 years - once because my new GTX 750 was too new for the distro-picked drivers, and once because of a memory leak that probably went unnoticed because most people don't keep X11 logged in for weeks at a time.

                (My Ryzen? Can run 6.5 with nVidia as long as I'm willing to give up VirtualBox. My mother's laptop with AMD graphics? Had to downgrade her from the 6.5 in linux-image-generic-hwe-22.04 to the 5.15 in linux-image-generic to make it work.)
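                (For anyone curious, that downgrade is basically just a meta-package swap on (K)ubuntu 22.04. A sketch, assuming the stock meta-package names - the headers package name in particular is an assumption, so adjust to whatever is actually installed:)

                Code:
                # sketch of the HWE-to-GA kernel swap on (K)ubuntu 22.04; meta-package
                # names are assumptions based on a stock install - adjust as needed
                sudo apt install linux-image-generic linux-headers-generic
                sudo apt remove linux-image-generic-hwe-22.04 linux-headers-generic-hwe-22.04
                # the already-installed 6.5 kernels remain bootable until purged, so pick
                # the 5.15 entry from the GRUB menu (or purge the 6.5 image packages)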

                ...and, in case anyone's wondering, the reason you'd want the three I listed is:
                • Easy Diffusion has the most polished UX but isn't extensible and doesn't currently support loading LoCons.
                • automatic1111's UI is an annoying hodge-podge with no apparent equivalent to Easy Diffusion's support for auto-saving and manually loading generation parameters as JSON files, but it's what all the plugins are written for.
                • Kohya SS is the most commonly recommended and tutorial'd way to make your own LoRAs.
                I find automatic1111's UX so inferior to Easy Diffusion's that I only turn to it as a last resort.

                There's also LoRA_Easy_Training_Scripts, which I think is trying to be a QML-based "Easy Diffusion to Kohya SS's automatic1111", but I haven't tried it yet.
                Last edited by ssokolow; 13 March 2024, 04:52 PM.



                • #38
                  Originally posted by Panix View Post
                  https://github.com/AUTOMATIC1111/sta...scussions/8344
                  Note, the "most commonly used implementation" is Automatic 1111 - and AMD gpus supposedly perform quite poorly - very unfortunate but if you check out the links - you'll discover that is the conclusion 'most in the know' take.
                  *nod* ...and I bought a tiny (i.e. was compatible with my old Athlon II X2's motherboard with the SATA connectors off the end of the x16 slot), single-fan (i.e. quiet) RTX 3060 12GiB at a great price during Cyber Monday sales.

                  The Tom's Hardware chart posted on that GitHub thread doesn't have anything in the same performance bracket from AMD and I certainly wouldn't have been able to afford something that competes with RTX 3090s, even before adding two to four of the $21 ultra-low-profile right-angle SATA cables I'd have needed to use it with the mobo I was running it in at the time.

                  Plus, I don't have air conditioning, so I'm reluctant to run big, beefy cards with high wattage requirements for that reason too. The #1 concern for my new Ryzen was "What can I get without increasing the CPU TDP from my usual 65W?" and I'm likely to turn off two of my three CFL-backlit LCDs when SDing during the summer to mitigate the heat output of a ~175W GPU that nvidia-smi says seems to like to draw around 150W while SDing.

                  (That said, given the trade-offs I had to make, if AMD were to push for an architecture that introduces GDDR DIMMs and GPUs with RAM upgrade slots, I'd be taking notice.)
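                  (For the record, that draw figure comes from just polling nvidia-smi while a generation is running - a one-liner along these lines; the exact query fields available depend on the driver version:)

                  Code:
                  # poll board power draw, the configured limit, and temperature once a second
                  nvidia-smi --query-gpu=power.draw,power.limit,temperature.gpu --format=csv -l 1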
                  Last edited by ssokolow; 13 March 2024, 05:05 PM.



                  • #39
                    Originally posted by ssokolow View Post

                    *nod* ...and I bought a tiny (i.e. was compatible with my old Athlon II X2's motherboard with the SATA connectors off the end of the x16 slot), single-fan (i.e. quiet) RTX 3060 12GiB at a great price during Cyber Monday sales.

                    The Tom's Hardware chart posted on that GitHub thread doesn't have anything in the same performance bracket from AMD and I certainly wouldn't have been able to afford something that competes with RTX 3090s, even before adding two to four of the $21 ultra-low-profile right-angle SATA cables I'd have needed to use it with the mobo I was running it in at the time.

                    Plus, I don't have air conditioning, so I'm reluctant to run big, beefy cards with high wattage requirements for that reason too. The #1 concern for my new Ryzen was "What can I get without increasing the CPU TDP from my usual 65W?" and I'm likely to turn off two of my three CFL-backlit LCDs when SDing during the summer to mitigate the heat output of a ~175W GPU that nvidia-smi says seems to like to draw around 150W while SDing.

                    (That said, given the trade-offs I had to make, if AMD were to push for an architecture that introduces GDDR DIMMs and GPUs with RAM upgrade slots, I'd be taking notice.)
                    I echo your sentiments. Lately I'm comparing against a (used) 3090 - the price of the used AMD GPU is about $100-$300 more, depending on the seller, and I don't really try negotiating much unless I'm ready to commit or genuinely interested. It just seems a shame to get a GPU that is almost out of warranty at this point, and the 3090 was a bit of a power-hungry hog too, with some temperature issues depending on what the seller did with it - sometimes they re-pasted it or replaced memory pads or whatever. The 3090 shouldn't be much better at productivity, but it was until recently? The 7900 XTX is the current gen and supposedly has 'better' (depending on perspective) Linux support - and if AMD improves ROCm support, does that mean only in AI, or in other software requiring/utilizing ROCm too? AMD supposedly made HIP-RT open source - what does that mean for Blender? Will it mean you can use it in Linux and performance will only be mediocre, but at least it works in Linux?
                    The only concern with 7900 XTX cards is that they do run hot, so with buying used I have to be mindful of that. My PSU is an 850W unit - it should be okay, but it would suck if I had to upgrade that too because of the GPU's power draw - though that's the same concern with a used 3090.

