AMD Ryzen 5 8600G Linux Performance

  • #21
    From the article:
    With the Ryzen 7 8700G you get 2 more cores / 4 more threads over the 8600G plus +100MHz on the boost clock while the 8600G has a +100MHz advantage on the base clock.
    The advantage is illusory. If you used affinity to prevent anything from being scheduled on 2 cores of the 8700G and loaded up the other 6, it would almost certainly hit the same frequency or better than a similar load on the 8600G.

    The base frequency just tells you the minimum guaranteed frequency under a heavy all-core workload. Since the two CPUs have the same TDP and are identical except for one of them having 2 cores disabled, you can bet the other would behave similarly if you likewise avoided using two of its cores.
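
    A quick way to check this on Linux is to pin a synthetic load to six cores of an 8700G and watch the clocks it settles at. A minimal sketch, assuming Linux CPU affinity and purely illustrative core IDs 0-5 (check the real topology with lscpu first):

    import multiprocessing
    import os

    ALLOWED_CORES = {0, 1, 2, 3, 4, 5}  # illustrative six cores; real IDs depend on SMT/topology

    def burn():
        # trivially CPU-bound stand-in for a real all-core workload
        x = 0
        while True:
            x = (x + 1) % 7

    if __name__ == "__main__":
        os.sched_setaffinity(0, ALLOWED_CORES)  # Linux-only: restrict this process to six cores
        workers = [multiprocessing.Process(target=burn) for _ in ALLOWED_CORES]
        for w in workers:
            w.start()
        # children inherit the affinity mask; watch clocks with turbostat or
        # `grep MHz /proc/cpuinfo` while this runs, then Ctrl+C to stop
        for w in workers:
            w.join()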



    • #22
      Originally posted by artivision View Post
      The correct way to test APUs is with RAM speed synchronized with the SoC bus at max speed, i.e. a 3 GHz bus combined with 6 GHz RAM.
      Not sure how you define "correct"? AMD released them with DDR5-5200 for two DIMMs and DDR5-3600 for four; everything higher is overclocking and not guaranteed. And who would buy an APU and spend that much money for some 5-10% gains in a few workloads? Not to mention the instability issues that come with overclocking.
      Everyone wants to know if they can match low-end GPUs with some perfect configuration.
      Yes, a few low-end GPUs like the RX 560, 6500 XT or a GTX 16** would have been appreciated. But I don't see the point in comparing the 8000G with expensive tuning against a possibly second-hand cheap dGPU system; we already know how that ends.

      Originally posted by grigi View Post
      Interesting that the GPU on the 8700G isn't ~50% faster as the specs indicate it should. It seems to be between 20% and 35% faster only.

      Is the 8700G *that* bandwidth starved?
      Depends on the game's bandwidth needs: for some games the 8000G suffers because its bandwidth is less than half that of competing dGPUs (remember the CPU also needs bandwidth). There are bottlenecks, like TDP and bandwidth, that aren't scaled up proportionally, and that limits your potential 50% gains.
      But this chip isn't made for 4K120 at highest settings; you should be OK with 1080p at medium settings and the occasional AAA game that needs lowest settings and FSR for 30 FPS.
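
      To put rough numbers on "less than half" (a back-of-the-envelope sketch; the dGPU figure is a generic 128-bit GDDR6 card at 14 Gbps, not a measured result):

      def ddr5_bandwidth_gbs(transfer_rate_mts, channels, bus_bits=64):
          # theoretical peak: transfers/s * bytes per transfer per channel * channels
          return transfer_rate_mts * channels * bus_bits / 8 / 1000

      print(ddr5_bandwidth_gbs(5200, 2))  # stock dual-channel DDR5-5200, ~83 GB/s shared with the CPU
      print(ddr5_bandwidth_gbs(6000, 2))  # DDR5-6000 overclock, ~96 GB/s
      print(14000 * 128 / 8 / 1000)       # generic 128-bit GDDR6 dGPU, ~224 GB/s all to itself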

      Originally posted by gukin View Post
      On the other hand, AMD skipped releasing the 4000 APUs to retail because they couldn't beat the Vega 11 on Raven.
      What are you talking about? I could buy one right now: https://geizhals.eu/?cat=cpuamdam4&xf=16686_Ryzen+4000 and they were faster than Vega 11, although not by much.

      The pricing for the 8000Gs is sad, and it's the only reason you can get a faster CPU + dGPU system for the same money. As soon as the 8700G comes down to 200 € you won't find a faster dGPU system for the money. But this wasn't much different with the 5700G.



      • #23
        Originally posted by Anux View Post
        Depends on the game's bandwidth needs: for some games the 8000G suffers because its bandwidth is less than half that of competing dGPUs (remember the CPU also needs bandwidth). There are bottlenecks, like TDP and bandwidth, that aren't scaled up proportionally, and that limits your potential 50% gains.
        TDP should not be a big factor at desktop-level TDPs for this APU. It's mostly bandwidth-starved because, as you said, the CPU uses memory too, and it has lower bandwidth than competing dGPUs.

        Then there's also the way the memory controller is configured: e.g. the Zen 2 cores in the Steam Deck have much higher memory access latencies than notebooks with Renoir and LPDDR4, because there the memory controller is optimized for the GPU over the CPU.
        In a regular APU like this, the memory controller is optimized for the CPU first.

        Regardless, it still shows that the memory bandwidth for the 8600G was about right, and the 8700G is starved.
        It makes sense that the next-gen "normal" APU is only going up to 16 CUs from 12, and the next-gen "mega" APU is going to have a 256-bit memory bus; it will really need it.
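
        If you want to sanity-check the CPU-visible share of that bandwidth yourself, a rough NumPy copy test is enough as a sketch (single-threaded, so it understates the real peak, and the result depends heavily on the DIMM config):

        import time
        import numpy as np

        N = 256 * 1024 * 1024  # 256 MiB source buffer, far larger than any cache here
        src = np.ones(N, dtype=np.uint8)
        dst = np.empty_like(src)

        best = float("inf")
        for _ in range(5):
            t0 = time.perf_counter()
            np.copyto(dst, src)  # reads src and writes dst, ~2x the buffer size in traffic
            best = min(best, time.perf_counter() - t0)

        print(f"~{2 * N / best / 1e9:.1f} GB/s of copy traffic (single thread)")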



        • #24
          Originally posted by grigi View Post
          TDP should not be a big factor at desktop-level TDPs for this APU.
          Certainly not in all cases.

          Regardless, it still shows that the memory bandwidth for the 8600G was about right, and the 8700G is starved.
          Even the 8600G is starved, as said, depending on the game. Gamers Nexus tested a few RAM configs: https://www.youtube.com/watch?v=MFzegmwHxPM



          • #25
            Originally posted by grigi View Post
            Regardless, it still shows that the memory bandwidth for the 8600G was about right, and the 8700G is starved.
            It makes sense that the next-gen "normal" APU is only going up to 16 CUs from 12, and the next-gen "mega" APU is going to have a 256-bit memory bus; it will really need it.
            What I want to know is this: Where is the Infinity Cache?

            After Infinity Cache helped RDNA2 dGPUs make a huge leap in performance, I fully expected AMD iGPUs to follow this path. In fact, I thought maybe they would even switch their higher-end laptop CPUs to being chiplet-based and include a small dGPU die, like the RX 6500, as a chiplet in-package. Recall that the RX 6500 has only a 64-bit memory bus, but gets by with the aid of 16 MB of on-die SRAM, as L3 cache.
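
            The intuition, as a toy model: traffic that hits the on-die SRAM never touches the narrow external bus, so effective bandwidth scales with the hit rate. The figures below are illustrative, not measured:

            def effective_bandwidth_gbs(mem_gbs, cache_gbs, hit_rate):
                # accesses served by the cache use SRAM bandwidth; misses go out to memory
                return hit_rate * cache_gbs + (1 - hit_rate) * mem_gbs

            external = 18000 * 64 / 8 / 1000  # 64-bit GDDR6 at 18 Gbps, ~144 GB/s
            cache_bw = 1000                   # assumed on-die SRAM bandwidth, purely illustrative
            for hit in (0.0, 0.3, 0.5):
                print(hit, effective_bandwidth_gbs(external, cache_bw, hit))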



            • #26
              Originally posted by coder View Post
              What I want to know is this: Where is the Infinity Cache?

              After Infinity Cache helped RDNA2 dGPUs make a huge leap in performance, I fully expected AMD iGPUs to follow this path. In fact, I thought maybe they would even switch their higher-end laptop CPUs to being chiplet-based and include a small dGPU die, like the RX 6500, as a chiplet in-package. Recall that the RX 6500 has only a 64-bit memory bus, but gets by with the aid of 16 MB of on-die SRAM, as L3 cache.
              To connect the cache dies there needs to be enough empty space on the compute die, which is the case on the big core dies, but C-cores and monolithic designs are optimized for area and can't fit the connections. AMD would have to make a special design for it or make all monolithic dies bigger and more expensive.

              Though I would love a G3D part, and I bet we will see something like this in the not-so-near future. Strix Halo might have quad-channel DDR5, but I think AM5 doesn't support more than 2 channels; maybe we get a version with 2 channels and V-Cache, or it stays mobile-only.



              • #27
                Why has AMD only ever supported dual channel, compared to Intel who have done quad for decades?

                Seems to me AMD have held the fort quite well with dual, and quad was a thing for a spell, but not anymore?



                • #28
                  Originally posted by stiiixy View Post
                  Why has AMD only ever supported dual channel, compared to Intel who have done quad for decades?
                  Can you be more specific? Both offer (and offered in the past) dual and quad platforms. We even have octa channel from both.



                  • #29
                    Originally posted by Anux View Post
                    Can you be more specific? Both offer (and offered in the past) dual and quad platforms. We even have octa channel from both.
                    Jeez, I was really tired and asking nonsensical things! While that wasn't what I was asking, I'll follow through anyway.


                    Desktop.

                    Did AMD offer quad there?

                    The reason I was asking was that AMD seem to be performing quite well with dual channel compared to Intel's quad-channel memory, an old and probably almost irrelevant concept by now.

                    And you're potentially throwing a spanner in my works! AMD and quad channel!?



                    • #30
                      Originally posted by stiiixy View Post
                      Did AMD offer quad there?
                      Only on HEDT for both I would say.
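
                      For scale, channel count just multiplies the same per-channel math used earlier in the thread (theoretical peaks only):

                      per_channel = 5200 * 64 / 8 / 1000  # one 64-bit DDR5-5200 channel, ~41.6 GB/s
                      for name, channels in (("desktop dual", 2), ("HEDT quad", 4), ("server octa", 8)):
                          print(name, channels * per_channel, "GB/s")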

