AMD Ryzen 7 5800X3D Continues Showing Much Potential For 3D V-Cache In Technical Computing


  • #31
    Originally posted by piotrj3 View Post

    But if you look at worst-case scenarios (1% and 0.1% lows), the 5800X3D isn't 10% ahead, it is often more like 30% ahead. This is why it is often called a gamer CPU: gamers care more about consistent gameplay than a few extra average FPS.
    Isn't this a false equivalence? Thousands of functions are being called every frame, so a single branch-prediction miss to memory isn't going to cause a stutter. It's about the aggregate... and I don't understand why the slowest frames would have proportionally more "misses." Are you saying the game hits more of those unpredictable functions during the bad frames?


    I would think it's about the cache holding more code/data, so when the engine does something atypical, whatever it's churning through is more likely to already be in the cache.
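
    For concreteness, "1% lows" summarize only the slowest frames, so a handful of slow, miss-heavy frames barely moves the average FPS but dominates the lows. A minimal sketch with made-up frame times (the numbers are invented, and this uses one common definition of the 1% low):

    ```python
    # Hypothetical frame-time trace in milliseconds: mostly ~60 FPS frames,
    # plus a handful of slow frames (e.g. cache/memory-miss-heavy ones).
    frame_times_ms = [16.7] * 990 + [50.0] * 10

    avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))

    # "1% low" FPS: mean of the slowest 1% of frames (one common definition).
    worst = sorted(frame_times_ms)[-len(frame_times_ms) // 100:]
    low_1pct_fps = 1000 / (sum(worst) / len(worst))

    # Ten bad frames out of a thousand barely dent the average,
    # but they are all that the 1% low measures.
    print(f"average: {avg_fps:.1f} FPS, 1% low: {low_1pct_fps:.1f} FPS")
    ```

    The average stays close to 59 FPS while the 1% low collapses to 20 FPS, which is why the two metrics can diverge so sharply on cache-sensitive workloads.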

    Comment


    • #32
      Originally posted by vsteel View Post

      Power limitations of the socket.
      Thank you!

      Comment


      • #33
        Originally posted by birdie View Post

        It is the last CPU for the AM4 platform. Define "the best", please. It's faster in some tasks, but lots of people don't even need that level of performance in the first place and would be happy with something like the Ryzen 5500, which costs just $160. The 5800X3D on average won't be three times faster.

        And God forbid we remember that Intel still exists and offers excellent CPUs like the Core i3 12100, which costs just $122 and has a built-in GPU, something that on AMD can only be found on much more expensive APUs like the 5600G and 5700G.
        God forbid I have to buy another motherboard versus upgrading my current AM4 to something better. I couldn't care less that Intel exists -- not for fanboy reasons, but because I'd have to spend an extra $130 (to get a motherboard equivalent to what I own on AM4) to use an Intel CPU. I could buy high-end AMD for the price I'd have to spend to buy mid-range Intel. I'm just trying to decide what I'll eventually toss into my AM4 box before I retire it from upgrades.

        Also, the Core i3 12100 is mostly a downgrade from my current APU, a Ryzen R5 4650G. I'm not even considering the 5600G because of that.

        Ignoring the lack of an iGPU, for me the "best" general-purpose desktop CPU has the best balance of core count, processing speed, and available features. The 5800X3D seems to have that balance within the 5000 series. While it has neither the highest core count nor the fastest clocks of the 5000s, the stacked cache gives it an edge that makes it arguably the "best" 8c/16t AM4 chip.

        I'm torn between the 5700G and the 5800X3D as the best general-purpose desktop AM4 CPU. To me the base 5800 model is rather moot -- compared to the 5700G it just has some more cache but loses the iGPU. I'd rather have the iGPU. It's basically GPU redundancy and VM options versus 1-3% faster Zstd.

        I'll be honest with you, the fastest-clocked AM4 5000 is kind of moot, too. Precision Boost Overdrive is friggin' simple to use and gives anyone willing to watch a YouTube video the ability to OC their Ryzen to its safe limits.

        Comment


        • #34
          Originally posted by EvilHowl View Post

          I don't think AMD is going to release more products for the AM4 lineup given that AM5 is almost here, though we don't know how long AM4 is going to be around once AM5 gets released.

          I think AMD will stop producing the 5800X3D as soon as they release any new 3D V-Cache CPU, but they might continue making CPUs and APUs to fill the entry-level market. I don't think they will release cheap AM5 products at launch, knowing that it's going to be a DDR5-only platform; although DDR5 prices are improving quickly, they are still not very good. So it wouldn't make sense to release cheap SKUs if the end user has to pay a premium for the RAM.

          So, I guess the 5800X3D will remain the best AM4 gaming CPU and the 5700G will remain the best AM4 APU.
          That's what I'm thinking, but I'd love to be wrong.

          Comment


          • #35
            Originally posted by skeevy420 View Post

            God forbid I have to buy another motherboard versus upgrading my current AM4 to something better. I couldn't care less that Intel exists -- not for fanboy reasons, but because I'd have to spend an extra $130 (to get a motherboard equivalent to what I own on AM4) to use an Intel CPU. I could buy high-end AMD for the price I'd have to spend to buy mid-range Intel. I'm just trying to decide what I'll eventually toss into my AM4 box before I retire it from upgrades.

            Also, the Core i3 12100 is mostly a downgrade from my current APU, a Ryzen R5 4650G. I'm not even considering the 5600G because of that.

            Ignoring the lack of an iGPU, for me the "best" general-purpose desktop CPU has the best balance of core count, processing speed, and available features. The 5800X3D seems to have that balance within the 5000 series. While it has neither the highest core count nor the fastest clocks of the 5000s, the stacked cache gives it an edge that makes it arguably the "best" 8c/16t AM4 chip.

            I'm torn between the 5700G and the 5800X3D as the best general-purpose desktop AM4 CPU. To me the base 5800 model is rather moot -- compared to the 5700G it just has some more cache but loses the iGPU. I'd rather have the iGPU. It's basically GPU redundancy and VM options versus 1-3% faster Zstd.

            I'll be honest with you, the fastest-clocked AM4 5000 is kind of moot, too. Precision Boost Overdrive is friggin' simple to use and gives anyone willing to watch a YouTube video the ability to OC their Ryzen to its safe limits.
            The 5900X is overall a much better CPU than the 5800X3D. More real cores, and it now costs even less than the 5800X3D.

            Comment


            • #36
              Originally posted by birdie View Post

              The 5900X is overall a much better CPU than the 5800X3D. More real cores, and it now costs even less than the 5800X3D.
              I'm aware I'm about to contradict my statement above, but I use enough inline compression that I'd rather have the V-Cache over the 4 extra cores. V-Cache really isn't so much a gaming feature as it is just great overall -- it doesn't hurt performance and may dramatically increase it.

              In my experience, for gaming and general use those 4 extra cores will be moot and next to useless. My 6c/12t 4650G is already perfectly adequate as a gaming CPU, and doubling my current core count won't make my 4K-upscaled-from-1080p60 games magically run at native 4K60... I wish it worked like that.

              Comment


              • #37
                Originally posted by atomsymbol View Post

                Threadripper (LGA 4096) cost isn't a good indicator of the cost of a hypothetical AM4 with 4-channel DDR4, because Threadripper supports 4 PCIe-x16 slots while AM4 supports 1 PCIe-x16 slot.
                You could just reuse the socket and not populate the extra lanes. The same physical socket is shared with Epyc as well. The original argument was that it would have been "cheaper" to do a 4-channel Ryzen compared to adding the additional cache stack. I would argue that it would be "cheaper" to do a cut-down Threadripper that uses the same socket than to design a new socket and die for a 4-channel Ryzen.

                Comment


                • #38
                  Yep... going with a mid-sized package that was sufficiently smaller to make a big cost difference would also require a third (mid-sized) I/O die somewhere between the client (125 mm^2?) and server (416 mm^2?) I/O dies we have today.

                  The question marks are because I am going from memory.

                  Comment


                  • #39
                    Something is off with the xmrig benchmarks. Neither of them is near where it should be, at least for the Monero variant, and other benchmarks have shown that the 5800X3D is roughly on par with the 5800X, if not a tad slower (not counting overclocking). This is because the RandomX algorithm already uses 2 MB/thread, which the 5800X can handle fine.

                    Basically, don't get the 5800X3D expecting faster Monero mining performance. I've been able to personally confirm this.

                    https://xmrig.com/benchmark?cpu=AMD+...Core+Processor
                    https://xmrig.com/benchmark?cpu=AMD+...Core+Processor

                    EDIT: That said, it's possible that with undervolting the X3D can be more power-efficient per hashrate. It depends on the motherboard etc. and what it allows you to set.
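
                    The scratchpad arithmetic above can be put in one place. This is just back-of-the-envelope math (L3 sizes from AMD's published specs; the ~2 MiB per-thread scratchpad is part of the RandomX design):

                    ```python
                    # Per-thread RandomX scratchpad (~2 MiB) times the thread count
                    # gives the hot working set for a fully loaded 8c/16t part.
                    scratchpad_mib = 2
                    threads = 16

                    working_set_mib = scratchpad_mib * threads  # 32 MiB

                    l3_5800x_mib = 32    # plain 5800X L3
                    l3_5800x3d_mib = 96  # 5800X3D L3 with stacked V-Cache

                    # The working set already fits in the plain 5800X's L3,
                    # so the extra V-Cache buys little for this workload.
                    print(working_set_mib <= l3_5800x_mib)    # True
                    print(working_set_mib <= l3_5800x3d_mib)  # True
                    ```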

                    Comment


                    • #40
                      Originally posted by brucethemoose View Post
                      I would think it's about the cache holding more code/data, so when the engine does something atypical, whatever it's churning through is more likely to already be in the cache.
                      Have a look at this.
                      https://www.youtube.com/watch?v=t_rzYnXEQlE

                      Apparently, not sharing data that is already in memory somewhere else has been standard industry practice since the inception of large 3D games. We can speculate that games that stutter because of I/O (yes, after removing shader-compilation stutters), with nothing particularly fancy on screen to justify it (say, hundreds or thousands of objects/units with complex behavior attached, implying descriptors with various pointers and so on), have nasty code with tons of duplication, probably a broken object model, and generally rushed routines written months before shipping.
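
                      The kind of duplication being described can be sketched abstractly: an engine that keys loaded assets by content hash shares byte-identical data instead of loading it once per path. A toy illustration (the asset bytes and cache structure here are made up, not from any real engine):

                      ```python
                      import hashlib

                      # Hypothetical asset store keyed by content hash, so byte-identical
                      # assets are kept once and shared, rather than duplicated per path.
                      _cache: dict[str, bytes] = {}

                      def load_asset(raw: bytes) -> bytes:
                          key = hashlib.sha256(raw).hexdigest()
                          if key not in _cache:      # first sight of this content: store it
                              _cache[key] = raw
                          return _cache[key]         # identical later loads share one copy

                      a = load_asset(b"rock_texture")
                      b = load_asset(b"rock_texture")  # same bytes under a different "path"
                      print(a is b, len(_cache))       # True 1: shared object, one entry
                      ```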

                      Comment
