Tiny Corp At "70%" Confidence For AMD To Open-Source Some Relevant GPU Firmware


  • #11
    Originally posted by Mathias View Post
    I'm sure AMD will move heaven and earth to sell those 6 XTXs... I know, he might sell 1000 tinyboxes with 6 each. His words just sound so ridiculous.
    They don't really care about Tiny Corp specifically. But they do care that a quasi-famous mouthpiece (from his iOS / PlayStation days) who can generate a lot of negative tech media press is (rightfully) crapping all over AMD's GPU compute ecosystem in public.

    Comment


    • #12
      Originally posted by sobrus View Post

      I have both a CPU and a GPU from AMD, but I don't quite think it's true. AMD GPUs are nowhere near nVidia's. They barely match GeForce in games.

      There was a time when ATi dominated with the R9700, but those days are long gone. In 2006, nVidia was the first to bring a DX10-class card to the PC (GeForce 8), along with CUDA, something that is still out of reach for AMD. They were also first to bring ray tracing and tensor cores to consumer cards. They were also first to bring DLSS and Frame Generation... first to have dual-issue units, and their tensor cores can work in parallel with the compute cores... AMD is just copying their ideas, a few years late. AMD was not present in high-end GPUs for years, and it seems like this will again be the case with RDNA4. I don't think it's because AMD has superior technology that they don't want to use.

      How on earth can anyone claim AMD GPUs are the "best"? Because they are a bit cheaper per fps in games? Because they usually have more memory? Or because of the open source drivers?
      Don't get me wrong, I have a 6800 XT, but let's be honest. If nVidia had open source drivers, I'd probably buy nVidia. Radeons are cheaper for a reason (like half-baked ROCm or ray tracing partially emulated on shaders).
      I do wish AMD would actually lead the way with some important new GPU tech. We get free and open source versions of similar tech a while later, which is great in and of itself. But it's always so reactionary to what NVIDIA is doing: G-SYNC, ray tracing, DLSS. "Hey, let's try to do those same things almost as well, but free and open source!" isn't usually a strategy to surpass your competition.

      Comment


      • #13
        Originally posted by Railander View Post

        Why do you think AMD, which makes both the best GPUs and (the best) CPUs, is worth 1/7 of Nvidia, which makes only (the best) GPUs?
        There is gigantic money to be made on AI and compute, as long as it actually works. Nobody is working with AMD GPUs because they just don't work, and they can't even spend time fixing them because it's not open source, regardless of how good the hardware theoretically is.
        I have no idea why you replied to me. I have an AMD GPU and CPU and would immediately buy a 7900 if I were confident in ROCm. I've been waiting forever for good AMD compute solutions. I just said that George Hotz with his tiny tinycorp buying 6 AMD cards is not the reason AMD would do anything. I'm sure AMD sells as many MI300 cards as they can produce to big clusters at OpenAI etc. George specifically wants to use low-budget / low-margin cards (compared to Instinct) for professional compute because they have better perf/$ for him. That's not the market AMD is especially excited about. I wish him all the luck and would be excited if he succeeds. My only point was: threatening not to buy 6 consumer cards sounds ridiculous in a billion-dollar market.

        Comment


        • #14
          Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post

          I do wish AMD would actually lead the way with some important new GPU tech. We get free and open source versions of similar tech a while later, which is great in and of itself. But it's always so reactionary to what NVIDIA is doing: G-SYNC, ray tracing, DLSS. "Hey, let's try to do those same things almost as well, but free and open source!" isn't usually a strategy to surpass your competition.
          AMD has done a good job here, bringing some nice technology to a wider audience. FSR is nice too, and really good given that it works without tensor cores or an optical flow unit. They have talented software engineers there. But yes, a bit of innovation wouldn't hurt. nVidia has been leading this segment for over 20 years.

          BTW, I've edited my post (the one you quoted) because it sounded too harsh. I do like AMD. And I agree with the rest of what @Railander has written.

          Comment


          • #15
            Originally posted by Railander View Post

            Why do you think AMD, which makes both the best GPUs and (the best) CPUs, is worth 1/7 of Nvidia, which makes only (the best) GPUs?
            There is gigantic money to be made on AI and compute, as long as it actually works. Nobody is working with AMD GPUs because they just don't work, and they can't even spend time fixing them because it's not open source, regardless of how good the hardware theoretically is.
            First, Nvidia GPUs generally have better hardware all around. I use and love my AMD GPU on Linux, but Nvidia just packs more and better stuff into their GPUs. We're not going to talk about price/performance here.

            As for the AI and compute stuff, AMD hardware is actually perfectly viable for it, and ROCm isn't *great*, but it also isn't useless. The biggest issue is simply third-party developers refusing to work with ROCm properly because they've already chosen CUDA for one reason or another and don't want to do the work twice. The CUDA<>ROCm translation layer we saw a few weeks ago proves this. The fact that AMD hardware got that close to its Nvidia counterparts through a translation layer, while the "officially supported" ROCm workloads were slower, is just sad to see.
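
            To give a feel for how mechanical most of that "work twice" actually is, here's a toy sketch (my own example, not from any real project): a CUDA kernel plus, in the comments, what the HIP/ROCm port of the host code would look like. The 1:1 renaming shown is the kind of thing tools like hipify automate; apart from the cuda*/hip* prefixes, the device code is the same.

            // Toy SAXPY kernel in CUDA C++. Under HIP (ROCm) the kernel compiles as-is;
            // only the host-side API prefixes change.
            __global__ void saxpy(int n, float a, const float *x, float *y) {
                int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
                if (i < n) y[i] = a * x[i] + y[i];
            }

            // Host side, CUDA vs. HIP (assumed 1:1 mapping):
            //   cudaMalloc(&d_x, n * sizeof(float));       ->  hipMalloc(&d_x, n * sizeof(float));
            //   cudaMemcpy(d_x, x, n * sizeof(float),
            //              cudaMemcpyHostToDevice);         ->  hipMemcpy(..., hipMemcpyHostToDevice);
            //   saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);  // same launch syntax under hipcc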

            Comment


            • #16
              Originally posted by sobrus View Post

              I have both a CPU and a GPU from AMD, but I don't quite think you can call Radeons the "best GPUs". AMD GPUs are nowhere near nVidia's. They barely match GeForce in games.

              There was a time when ATi dominated with the R9700, but those days are long gone. In 2006, nVidia was the first to bring a DX10-class card to the PC (GeForce 8), along with CUDA, something that is still out of reach for AMD. They were also first to bring ray tracing and tensor cores to consumer cards. They were also first to bring DLSS and Frame Generation... first to have dual-issue units, and their tensor cores can work in parallel with the compute cores... AMD is just copying their ideas, a few years later. AMD was not present in high-end GPUs for years, and it seems like this will again be the case with RDNA4. I don't think it's because AMD has superior technology that they don't want to use.

              But at least they do have open source drivers for Linux, and they are usually reasonably priced...

              I don't blame nVidia for wanting to make money on the excellent solutions they offer. They earned it; CUDA is already 18 years old and has worked nicely on each and every GPU generation since.
              Originally posted by Daktyl198 View Post

              First, Nvidia GPUs generally have better hardware all around. I use and love my AMD GPU on Linux, but Nvidia just packs more and better stuff into their GPUs. We're not going to talk about price/performance here.

              As for the AI and compute stuff, AMD hardware is actually perfectly viable for it, and ROCm isn't *great*, but it also isn't useless. The biggest issue is simply third-party developers refusing to work with ROCm properly because they've already chosen CUDA for one reason or another and don't want to do the work twice. The CUDA<>ROCm translation layer we saw a few weeks ago proves this. The fact that AMD hardware got that close to its Nvidia counterparts through a translation layer, while the "officially supported" ROCm workloads were slower, is just sad to see.
              Typo, but I can't find how to edit the comment.

              Comment


              • #17
                Originally posted by sobrus View Post

                AMD has done a good job here, bringing some nice technology to a wider audience. FSR is nice too, and really good given that it works without tensor cores or an optical flow unit. They have talented software engineers there. But yes, a bit of innovation wouldn't hurt. nVidia has been leading this segment for over 20 years.

                BTW, I've edited my post (the one you quoted) because it sounded too harsh. I do like AMD. And I agree with the rest of what @Railander has written.
                I crap on AMD all the time because their broader GPU compute strategy / execution has been a dumpster fire for many years. But I don't do it out of malice. Last night I bought my 11th, 12th, and 13th RDNA 1 or newer AMD GPUs: a Radeon Pro W5500 as a second GPU in my desktop to play with stuff like LookingGlass, and a pair of 5700 XTs. Getting their act together with ROCm would be good for us as users and even better for AMD's financials.

                Comment


                • #18
                  Originally posted by Daktyl198 View Post

                  First, Nvidia GPUs generally have better hardware all around. I use and love my AMD GPU on Linux, but Nvidia just packs more and better stuff into their GPUs. We're not going to talk about price/performance here.

                  As for the AI and compute stuff, AMD hardware is actually perfectly viable for it, and ROCm isn't *great*, but it also isn't useless. The biggest issue is simply third-party developers refusing to work with ROCm properly because they've already chosen CUDA for one reason or another and don't want to do the work twice. The CUDA<>ROCm translation layer we saw a few weeks ago proves this. The fact that AMD hardware got that close to its Nvidia counterparts through a translation layer, while the "officially supported" ROCm workloads were slower, is just sad to see.
                  NVIDIA's Compute Unified Device Architecture (CUDA) has long been the de facto standard programming interface for developing GPU-accelerated software. Over the years, NVIDIA has built an entire ecosystem around CUDA, cementing its position as the leading GPU computing and AI manufacturer. However, r...


                  Comment


                  • #19
                    Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post

                    They don't really care about Tiny Corp specifically. But they do care that a quasi-famous mouthpiece (from his iOS / PlayStation days) who can generate a lot of negative tech media press is (rightfully) crapping all over AMD's GPU compute ecosystem in public.
                    If this ends up improving the AMD Linux driver stack by Bastille Day, it's a success.

                    Comment


                    • #20
                      Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post

                      NVIDIA's Compute Unified Device Architecture (CUDA) has long been the de facto standard programming interface for developing GPU-accelerated software. Over the years, NVIDIA has built an entire ecosystem around CUDA, cementing its position as the leading GPU computing and AI manufacturer. However, r...

                      That would be hilarious if it weren't so awful. And it shows just how far the compute gap closed with ZLUDA. That being said, I'm not sure how much that clause in the EULA can be enforced, legally speaking. ZLUDA doesn't modify the CUDA binary, in the same way that WINE doesn't modify Windows binaries. Nvidia can't legally require you not to run other pieces of software on your device. That's why most tech makers artificially lock things down via software.

                      There might be an argument to be made that you're using the software outside of its intended use, but again, as long as you're not hacking into it, I don't think there would be a strong case.
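
                      For anyone wondering what "doesn't modify the binary" means in practice: a layer like that just ships a library exporting the same symbols the unmodified application already resolves at load time. Here's a minimal toy sketch of the idea (my own illustration, not ZLUDA's actual code; the real driver API declares CUresult cuInit(unsigned int)):

                      // toy_libcuda.cpp -- illustrative stand-in for the CUDA driver library.
                      // The application binary stays byte-for-byte unchanged; the dynamic loader
                      // simply resolves cuInit() from this library instead of NVIDIA's.
                      #include <cstdio>

                      extern "C" int cuInit(unsigned int flags) {   // real API returns a CUresult enum
                          std::fprintf(stderr, "cuInit(%u) intercepted, routing to another backend\n", flags);
                          return 0;                                 // 0 == CUDA_SUCCESS
                      }

                      // Assumed usage (hypothetical paths):
                      //   c++ -shared -fPIC -o libcuda.so.1 toy_libcuda.cpp
                      //   LD_LIBRARY_PATH=$PWD ./unmodified_cuda_app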

                      Comment
