AMD Ryzen 7 5800X3D On Linux: Not For Gaming, But Very Exciting For Other Workloads


  • #41
    Originally posted by Raka555 View Post

    Being a bit pedantic here, but the apps don't "take advantage" of a larger cache.
    It's more that bloated apps require larger caches.
    Isn't the data also using the cache? A larger lookup window is better served by a bigger cache than by a smaller one that can't fit it all.



    • #42
      Originally posted by miskol View Post
      It would be nice to see benchmarks of the 5800X vs. the 5800X3D at the same frequency, so we can see how much the V-Cache adds. The 5800X and 5800X3D run at different frequencies, so the 5800X would have to be downclocked.
      This video by Hardware Unboxed might be of interest...
      https://www.youtube.com/watch?v=sw97hj18OUE



      • #43
        Originally posted by skeevy420 View Post

        Zstd as well. Like Michael points out in the article, that probably had good benefits for file systems using Zstd for compression. I wonder if LZ4, XZ, and other codecs get performance improvements as well.
        Originally posted by ResponseWriter View Post
        zram with zstd as well, I'd imagine.
        I doubt any of the in-line zstd stuff will benefit. IIRC, btrfs with compression limits extents to 128 KiB max for seekability, and zram compresses 32 KiB at a time by default, or 4 KiB at a time if you use the page-cluster=0 sysctl tweak from Android.
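The intuition behind this is easy to demonstrate: compressing each 4 KiB page independently, as zram does, means the compressor only ever sees a 4 KiB history window, so its working set stays tiny no matter how big L3 is. A rough stdlib sketch (using zlib as a stand-in for zstd, since zstd isn't in the Python standard library):

```python
import zlib

# 128 KiB of repetitive data, roughly one compressed-extent's worth
data = bytes(range(256)) * 512

# Compress the whole buffer in one stream: full history window available
whole = len(zlib.compress(data))

# Compress 4 KiB pages independently, as zram does per page:
# each call starts over with an empty history window
paged = sum(len(zlib.compress(data[i:i + 4096]))
            for i in range(0, len(data), 4096))

# Per-page compression gives up long-range matches, so it compresses
# worse, but its working set fits comfortably in L1/L2
print(whole, paged)
```

The flip side is exactly the point above: a compressor whose window never exceeds a page has nothing to gain from 96 MB of L3.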

        Originally posted by atomsymbol View Post
        It is probable that large (by year-2022 measures) L3 caches will become a standard feature in future CPUs because the 5800X3D has only a few downsides compared to 5800X. 3% lower performance in 90% of cases is a reasonable tradeoff for enabling 25-50% higher performance (instructions per clock) in 10% of other cases.
        Going by die area and packaging complexity, the 5800X3D may cost as much to make as the 5950X does. And personally, I'd rather have twice the cores.

        Buuuut, maybe AMD could totally dominate the mainstream laptop and entry-level desktop gaming graphics markets if they stacked an SRAM die on an APU as SLC/Infinity Cache. Might give 128-bit LPDDR5 a lot of legs.



        • #44
          Originally posted by skeevy420 View Post
          Zstd as well. Like Michael points out in the article, that probably had good benefits for file systems using Zstd for compression. I wonder if LZ4, XZ, and other codecs get performance improvements as well.
          LZ4, unlikely. XZ I'm not familiar enough with to say.

          The zstd performance boost will be from longest-match searches, within a dictionary that fits in cache. That's why -8 outperformed a 5950, but -16 (or whatever the other value was) was a wash against the stock 5800.

          "Simple" LZ, like LZ4, just uses the best recent match within a very small block (in some cases, THE most recent), and that'll generally be in L*2*, let alone needing half a gig of L3.
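That window-size intuition can be made concrete with a toy match finder. This is a hypothetical sketch, not LZ4's or zstd's actual code: the point is only that the compressor's working set is bounded by `window`, so an LZ4-style small window fits in L1/L2, while the multi-megabyte windows of zstd's higher levels are where a huge L3 starts to pay off.

```python
def lz_matches(data: bytes, window: int):
    """Toy greedy LZ77-style match finder.

    Searches for the longest match within the last `window` bytes only,
    so the memory the compressor repeatedly touches is bounded by the
    window size, not by the input size.
    """
    i, out = 0, []
    while i < len(data):
        start = max(0, i - window)
        best_len, best_off = 0, 0
        for j in range(start, i):  # search only inside the window
            length = 0
            while (i + length < len(data)
                   and data[j + length] == data[i + length]
                   and length < 255):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= 4:  # emit an (offset, length) back-reference
            out.append(("match", best_off, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out
```

For example, `lz_matches(b"abcabcabcabc", window=8)` emits three literals and then one long back-reference; with `window` in the kilobytes the whole search stays cache-resident on any modern core.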
          Last edited by arQon; 26 April 2022, 03:12 AM.



          • #45
            Originally posted by Raka555 View Post
            Being a bit pedantic here, but the apps don't "take advantage" of a larger cache.
            It's more that bloated apps require larger caches.

            If LZ4 were well written, you wouldn't see much of a boost.
            Wow, you do not understand compression *at all*.

            Next time, maybe learn even *basic* concepts before trying to pretend you're a 1337 h4x0r? This isn't even 101-level stuff.



            • #46
              Originally posted by yump View Post
              Going by die area and packaging complexity, the 5800X3D may cost as much to make as the 5950X does. And personally, I'd rather have twice the cores.
              Horses for courses. I'm with you, but gamer-me of years gone by would absolutely take the higher framerate over 8 more cores that are sitting idle 99% of the time.

              Up to a point, die is die. Xeons used to not waste die space on IGP so they could use it for (at the time, massive) additional L3 instead. Vertical stacking doesn't change that as much as an optimistic view would like, because you still have to shed the heat from it. We're not going to be adding a new layer each year for the next 40 years like we did with transistor shrinks.

              > Buuuut, maybe AMD could totally dominate the mainstream laptop and entry-level desktop gaming graphics markets if they stacked an SRAM die on an APU as SLC/Infinity Cache.

              I'm not sure that's sensible even in a perfect world, let alone one where it wouldn't put your "entry-level" APU at the same cost as a CPU+(trash tier)GPU combo. IDK what the typical texture cache hit rate is for a GPU these days, but I expect it's "high enough". If you want your entire collection of atlases in there though, plus geometry, plus shaders, etc - yeah, it's probably going to be cheaper to just use VRAM and a dedicated chip.

              Besides, you're talking about the one market AMD already has no competition in. Seems kinda silly to price themselves OUT of it for no reason after all that hard work.



              • #47
                Originally posted by F.Ultra View Post
                Could also be that this is simply maxing out LZ4 performance on this CPU: the latency of compression/decompression at this CPU's speed is right at the threshold where more cache doesn't help, i.e. the prefetcher is as fast as, or faster than, the algorithm.
                Well, yes, that's exactly it. This particular program just doesn't need much cache, as it's designed to work well on very small processors without any cache. Its working set probably fits in the L1 of most desktop processors these days, and certainly within the L2. If anything, the 5800X3D will run this slower (because of clocks and increased L3 latency), which is completely expected.

                Originally posted by arQon View Post
                LZ4, unlikely. XZ I'm not familiar enough with to say.
                xz on the highest setting uses a dictionary of 64 MiB and can use a working space of almost 11 times that, so I'd think it would benefit from as much L3 as you can throw at it. It would actually be a very interesting test to just run xz at different levels and compare the two processors. It'd make an interesting line graph, IMHO.
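For anyone wanting to poke at this, the dictionary size can be pinned explicitly through Python's stdlib lzma bindings. A sketch, requesting the 64 MiB dictionary mentioned above (level 9's default) directly via the filter chain:

```python
import lzma

# Explicitly request a 64 MiB LZMA2 dictionary: 64 MiB of match history
# is a working set far beyond any L2, which is why extra L3 should help
# xz's match search at high levels.
filters = [{"id": lzma.FILTER_LZMA2, "dict_size": 64 * 1024 * 1024}]

data = b"phoronix forum benchmark " * 40_000  # ~1 MB of repetitive text
blob = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)

assert lzma.decompress(blob) == data
print(f"{len(data)} -> {len(blob)} bytes")
```

Sweeping `dict_size` from a few KiB up to 64 MiB on both CPUs would produce exactly the line graph described above.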
                Last edited by willmore; 26 April 2022, 07:34 AM.



                • #48
                  Originally posted by EvilHowl View Post
                  I must say I'm impressed with the 5800X3D. It can easily beat the 12900K in gaming, which is exactly what AMD claimed, while drawing less power. It can be a drop-in replacement for almost any AM4 board, and it isn't as RAM-dependent as other SKUs are (mainly because of its big pool of L3 cache). It's a little pricey, but it's a top-of-the-line CPU, anyway.

                  I don't think we have ever seen such a versatile platform. AM4 really delivered!
                  Yep - AM4 will be accorded iconic status by PC historians IMO.



                  • #49
                    Typo on the last page, Michael...

                    "AMD #D V-Cache"

                    You used '#' instead of '3'



                    • #50
                      Michael: what about the Linux-native versions of some of the games used for benchmarks? I'm thinking of Shadow of the Tomb Raider, for example. I wondered when I read that you benchmark games using Proton: what about without Proton?
