AMD Radeon RX 6600 Linux Performance

  • #41
    Originally posted by Phoronix article
    boar power
    Sounds pretty wild.

    The RX 6600 looks like a fine mid-range card, if that suggested price ever turns out to be real once it hits (or doesn't hit!) the market. Right now prices for all sorts of electronics have gone through the roof, stock can't be delivered at all, and fricken miners are buying up shipping containers full of GPUs before they're even shipped.

    Also I'd love to see some power consumption numbers: BACO mode (aka Zero Core), idle desktop. That's important for me. And of course stability tests: will it wake up nicely from BACO, or from suspend-to-RAM (S3)?
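
    If anyone else wants to poke at that on a card they already have before full reviews land, here's a rough Python sketch (assuming an amdgpu card exposed as card0 and the usual hwmon power1_average attribute; paths and attribute names may differ on other setups) that reads the driver's reported board power and the runtime power-management state:

    #!/usr/bin/env python3
    # Rough sketch: report amdgpu board power draw and runtime-PM (BACO) state.
    # Assumes the GPU is card0; adjust the path on multi-GPU systems.
    import glob, pathlib

    DEV = pathlib.Path("/sys/class/drm/card0/device")

    def read(path):
        try:
            return path.read_text().strip()
        except OSError:
            return "n/a"

    # "suspended" here normally means the GPU is sitting in its low-power
    # state (BACO on recent Radeons with runtime PM enabled).
    print("runtime PM status:", read(DEV / "power" / "runtime_status"))

    # Average board power as reported by the driver's hwmon node, in microwatts.
    for hwmon in glob.glob(str(DEV / "hwmon" / "hwmon*")):
        power = read(pathlib.Path(hwmon) / "power1_average")
        if power not in ("n/a", ""):
            print(f"power1_average: {int(power) / 1_000_000:.1f} W")
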
    Stop TCPA, stupid software patents and corrupt politicians!



    • #42
      Sometimes you will get lucky with local stores, but generally online you can expect to pay triple the $329 USD asking price, so somewhere in the $1000 USD range.

      Honestly I'd be happy with a 1650 if someone would sell me one at its $149 MSRP. Good luck, I know.



      • #43
        Originally posted by TemplarGR View Post
        It is not anecdotal though.
        stormcrow's post is literally the definition of anecdotal evidence.

        Originally posted by TemplarGR View Post
        Gigabyte has produced some shitty products in the last decade. I had a gigabyte amd gpu die 18 months from purchase too.
        So did ASUS, for instance. My ASUS Maximus motherboard has been glitchy ever since I got it in 2015, even after some eight BIOS updates. My ASUS RX570 has already died on me once, and the replacement one sometimes locks up after a soft reboot. Does this mean that ASUS makes bad products? Technically, yes, but they are no better or worse than similar products from other manufacturers.



        • #44
          Does anybody know whether the 1060 Michael is benchmarking is the 3GB or the 6GB version of the card? I'm still holding on to my 1060 6GB here, and it's interesting to see how large the gap is getting, but if the 3GB is being benchmarked it changes things a bit. FWIW I'm still playing games at 2560x1440 with the 1060 6GB, and 2D games run fine even at 4K 120Hz, but the 1060 really doesn't cut it for high-resolution VR.



          • #45
            Originally posted by mm0zct View Post
            Does anybody know whether the 1060 Michael is benchmarking is the 3GB or the 6GB version of the card? I'm still holding on to my 1060 6GB here, and it's interesting to see how large the gap is getting, but if the 3GB is being benchmarked it changes things a bit. FWIW I'm still playing games at 2560x1440 with the 1060 6GB, and 2D games run fine even at 4K 120Hz, but the 1060 really doesn't cut it for high-resolution VR.
            It's shown in the system table on the 2nd page.
            Michael Larabel
            https://www.michaellarabel.com/



            • #46
              Originally posted by TemplarGR View Post

              Nvidia has throttling issues in all of its cards. Their throttling is designed in such a way as to perform best in short bursts in order to get great benchmark results, but if the card gets hot after prolonged use it loses much of its power.
              I think this has more to do with nVidia traditionally having higher boost clocks earlier on in their GPU generations than AMD's equivalent cards at the time (e.g. the nVidia 750 Ti boosting up to 1.3 GHz vs. the AMD R9 380 at only around 1 GHz).

              Here's an observation I've made personally:

              If I ran my [factory-overclocked] 750 Ti in mailbox mode (a.k.a. triple-buffering), which means that even though my games were v-synced to 60 FPS, my card would nevertheless always run at 100% GPU utilization, then games would run really smoothly for around the first 15 minutes or so, but would regularly drop sharply in performance afterwards.

              Now, when looking at the clocks, I'd see that the chip would run with a constant 1.3 GHz boost and then drop to below 1 GHz because of thermal throttling.

              That's why I was forced to revert to double-buffered V-sync when gaming, so that my GPU would only make use of its boost clocks when actually required.
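
              If anyone wants to reproduce that clock observation on their own card, here's a rough Python sketch of the kind of logging I mean (assuming nvidia-smi is installed and your driver supports these query fields; AMD users would read the equivalent sysfs files instead). Leave it running in a terminal while you play:

              #!/usr/bin/env python3
              # Rough sketch: log GPU clock, temperature and utilization once per
              # second to see whether sustained load pulls the boost clocks down.
              import csv, subprocess, sys, time

              FIELDS = "clocks.sm,temperature.gpu,utilization.gpu"

              writer = csv.writer(sys.stdout)
              writer.writerow(["elapsed_s", "sm_clock_mhz", "temp_c", "util_pct"])

              start = time.time()
              while True:
                  row = subprocess.run(
                      ["nvidia-smi", "--query-gpu=" + FIELDS,
                       "--format=csv,noheader,nounits"],
                      capture_output=True, text=True, check=True,
                  ).stdout.strip()
                  writer.writerow([f"{time.time() - start:.0f}"]
                                  + [v.strip() for v in row.split(",")])
                  sys.stdout.flush()
                  time.sleep(1)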

              Nowadays, AMD's GPUs also have boost clocks which are quite significant, so I expect the same throttling applies to them as well.



              • #47
                Originally posted by Linuxxx View Post
                Nowadays, AMD's GPUs also have boost clocks which are quite significant, so I expect the same throttling applies to them as well.
                Yeah, since the "review" industry, YouTube shills, and other so-called "tech journalists" (which is a euphemism for "hardware salesmen") never called out Nvidia on this and pretended their hardware was much better for consumers than it really was, AMD had no other option but to do the same thing. Now both manufacturers scam us; that's an improvement, am I right?

                And make no mistake, IT IS A SCAM. No one plays video games in short bursts, and the vast majority of AAA games that are most likely to demand the extra performance are constantly demanding, not demanding in bursts. So for any AAA gamer who actually needs GPU performance, the GPU he buys is effectively running at stock clocks most of the time due to throttling, yet the reviews scammed him with results from benchmark runs lasting only a couple of minutes...



                • #48
                  Originally posted by agd5f View Post

                  For the vast majority of games, the effect is negligible between gen3 and gen4:
                  https://www.techpowerup.com/review/a...caling/28.html
                  That appears to be so, for now. But even the article you linked calls out Hitman 3 and Death Stranding for performance loss, and Hardware Unboxed found a 25% performance difference in DOOM Eternal between PCIe 3 and PCIe 4, and made the reasonable assumption that a lack of memory bandwidth is why the 6600 XT fell off against Nvidia products if the user had the temerity to increase the resolution.

                  id's use of memory bandwidth is currently unusual, but game developers have an ever-growing thirst for it, and the 6600/XT will have a uniquely hard time sating that going forward. Nvidia's cards won't. Again, I hope that AMD considers Infinity Cache a plus, not a replacement for bus width, in the future.

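                  For anyone wondering whether the slot matters on their own box, here's a rough Python sketch (assuming the GPU shows up as card0 and that the kernel exposes the standard PCIe link attributes; the 6600/XT is x8 electrically, so the negotiated link speed is what matters) that prints the current vs. maximum link:

                  #!/usr/bin/env python3
                  # Rough sketch: print the negotiated vs. maximum PCIe link of the GPU.
                  # Assumes the card of interest is card0; adjust on multi-GPU boxes.
                  # Note: many GPUs drop the link speed at idle, so check under load.
                  import pathlib

                  dev = pathlib.Path("/sys/class/drm/card0/device")

                  for attr in ("current_link_speed", "max_link_speed",
                               "current_link_width", "max_link_width"):
                      path = dev / attr
                      value = path.read_text().strip() if path.exists() else "n/a"
                      print(f"{attr}: {value}")
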


                  • #49
                    Originally posted by Linuxxx View Post

                    I think this has more to do with nVidia traditionally having higher boost clocks earlier on in their GPU generations than AMD's equivalent cards at the time (e.g. the nVidia 750 Ti boosting up to 1.3 GHz vs. the AMD R9 380 at only around 1 GHz).

                    Here's an observation I've made personally:

                    If I ran my [factory-overclocked] 750 Ti in mailbox mode (a.k.a. triple-buffering), which means that even though my games were v-synced to 60 FPS, my card would nevertheless always run at 100% GPU utilization, then games would run really smoothly for around the first 15 minutes or so, but would regularly drop sharply in performance afterwards.

                    Now, when looking at the clocks, I'd see that the chip would run with a constant 1.3 GHz boost and then drop to below 1 GHz because of thermal throttling.

                    That's why I was forced to revert to double-buffered V-sync when gaming, so that my GPU would only make use of its boost clocks when actually required.

                    Nowadays, AMD's GPUs also have boost clocks which are quite significant, so I expect the same throttling applies to them as well.
                    Honestly, that sounds like your case cooling was just bad. Power scales roughly with the cube of the clock, so a sustained 1.3 GHz boost means roughly double the power (1.3^3 ≈ 2.2), and absorbing that for 15 minutes takes far more thermal inertia than a 750 Ti's cooler has, unless your card was a fanless model (factory overclocked = not likely).



                    • #50
                      Originally posted by Linuxxx View Post
                      Nowadays, AMD's GPUs also have boost clocks which are quite significant, so I expect the same throttling applies to them as well.
                      Originally posted by TemplarGR View Post
                      And make no mistake, IT IS A SCAM. No one plays video games in short bursts, and the vast majority of AAA games that are most likely to demand the extra performance are constantly demanding, not demanding in bursts. So for any AAA gamer who actually needs GPU performance, the GPU he buys is effectively running at stock clocks most of the time due to throttling, yet the reviews scammed him with results from benchmark runs lasting only a couple of minutes...
                      Throttling issues are not related to higher clocks as such, but rather to a mismatch between the heat a chip generates and the heat that the card's cooling solution can dissipate... or, as yump mentioned, a mismatch between the heat that the card's cooling solution is dissipating and the case airflow's ability to exhaust that heat outside the case.

                      Some reviewers explicitly warm up the cards under test before starting benchmarks; others, like Phoronix, run the tests back to back (so the card stays hot) and run them multiple times, which achieves a similar effect. If you scroll down to the bottom of the article linked below (which has good temperature/clock-vs-time graphs), you'll see that the engine clock stays pretty constant during a ~20 minute run despite temperature and fan speed increasing, which suggests that the card's cooling solution is doing its job:

                      https://www.guru3d.com/articles-page...-review,6.html
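
                      If you want to produce that kind of clock/temperature-vs-time data for your own card, here's a rough Python sketch (assuming an amdgpu card at card0 and the usual pp_dpm_sclk and hwmon temp1_input files; other drivers expose different interfaces) that samples the active shader clock and GPU temperature during a long run:

                      #!/usr/bin/env python3
                      # Rough sketch: sample the active amdgpu shader-clock level and the
                      # GPU temperature every few seconds during a long gaming/benchmark run.
                      import glob, pathlib, time

                      DEV = pathlib.Path("/sys/class/drm/card0/device")
                      HWMON = next(iter(glob.glob(str(DEV / "hwmon" / "hwmon*"))), None)

                      def active_sclk():
                          # pp_dpm_sclk lists the clock levels; the active one ends with '*'.
                          for line in (DEV / "pp_dpm_sclk").read_text().splitlines():
                              if line.strip().endswith("*"):
                                  return line.split(":", 1)[1].strip(" *")
                          return "n/a"

                      start = time.time()
                      while True:
                          temp = "n/a"
                          if HWMON:
                              temp = int(pathlib.Path(HWMON, "temp1_input").read_text()) / 1000
                          print(f"{time.time() - start:6.0f}s  sclk={active_sclk()}  temp={temp}C")
                          time.sleep(5)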
                      Test signature

