AMD Ryzen 7 5800X3D On Linux: Not For Gaming, But Very Exciting For Other Workloads
Originally posted by miskol:
It would be nice to see benchmarks of the 5800X vs the 5800X3D at the same frequency, so we could see how much the V-Cache adds. Since the 5800X and 5800X3D run at different frequencies, you'd have to downclock the 5800X.
https://www.youtube.com/watch?v=sw97hj18OUE
Originally posted by skeevy420:
Zstd as well. As Michael points out in the article, that probably has good benefits for file systems using Zstd for compression. I wonder if LZ4, XZ, and other codecs get performance improvements as well.

Originally posted by ResponseWriter:
zram with zstd as well, I'd imagine.
Originally posted by atomsymbol:
It is probable that large (by year-2022 measures) L3 caches will become a standard feature in future CPUs, because the 5800X3D has only a few downsides compared to the 5800X. 3% lower performance in 90% of cases is a reasonable tradeoff for enabling 25-50% higher performance (instructions per clock) in the other 10% of cases.
Buuuut, maybe AMD could totally dominate the mainstream laptop and entry-level desktop gaming graphics markets if they stacked an SRAM die on an APU as SLC/Infinity Cache. It might give 128-bit LPDDR5 a lot of legs.
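On the zram-with-zstd suggestion above: for anyone wanting to try it, here is a minimal sketch of setting up a zstd-compressed zram swap device. The device name `zram0`, the 4G size, and the swap priority are arbitrary illustrative choices; it requires root, and the kernel must ship zram and zstd support.

```shell
# Create one zram device
sudo modprobe zram num_devices=1

# Pick zstd as the compressor (reading this file lists the available algorithms)
echo zstd | sudo tee /sys/block/zram0/comp_algorithm

# Give the device an uncompressed capacity, then use it as high-priority swap
echo 4G | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0
```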
Originally posted by skeevy420:
Zstd as well. As Michael points out in the article, that probably has good benefits for file systems using Zstd for compression. I wonder if LZ4, XZ, and other codecs get performance improvements as well.
The zstd performance boost will come from longest-match searches within a dictionary that fits in cache. That's why -8 outperformed a 5950X, but -16 (or whatever the other value was) was a wash against the stock 5800X.
"Simple" LZ, like LZ4, just uses the best recent match within a very small block (in some cases, *the* most recent), and that'll generally be in L2, let alone needing half a gig of L3.

Last edited by arQon; 26 April 2022, 03:12 AM.
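The window-size point here can be illustrated with a toy LZ77-style match finder. This is a sketch for illustration only: real LZ4 and zstd use hash-table match finders rather than a linear scan, and `find_longest_match` is a made-up name. The idea it shows is that the back-search window bounds the working set that has to stay cache-resident.

```python
def find_longest_match(data: bytes, pos: int, window: int) -> tuple[int, int]:
    """Return (offset, length) of the longest match for data[pos:] starting
    at most `window` bytes back. A tiny window keeps the search in L1/L2;
    a huge window is where a big L3 starts to pay off."""
    best_off, best_len = 0, 0
    for cand in range(max(0, pos - window), pos):
        length = 0
        # Overlapping matches are allowed, as in real LZ77-family codecs.
        while pos + length < len(data) and data[cand + length] == data[pos + length]:
            length += 1
        if length > best_len:
            best_off, best_len = pos - cand, length
    return best_off, best_len

# A 2-byte window misses the repeat that a 64-byte window finds:
print(find_longest_match(b"abcabcabcx", 3, 64))  # (3, 6)
print(find_longest_match(b"abcabcabcx", 3, 2))   # (0, 0)
```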
Originally posted by Raka555:
Being a bit pedantic here, but the app doesn't "take advantage" of a larger cache. It's more that bloated apps require larger caches. If LZ4 were well written, you wouldn't see much of a boost.
Next time, maybe learn even *basic* concepts before trying to pretend you're a 1337 h4x0r? This isn't even 101-level stuff.
Originally posted by yump:
Going by die area and packaging complexity, the 5800X3D may cost as much to make as the 5950X does. And personally, I'd rather have twice the cores.
Up to a point, die is die. Xeons used to not waste die space on an IGP so they could use it for (at the time, massive) additional L3 instead. Vertical stacking doesn't change that as much as an optimistic view would like, because you still have to shed the heat from it. We're not going to be adding a new layer each year for the next 40 years the way we did with transistor shrinks.
> Buuuut, maybe AMD could totally dominate the mainstream laptop and entry-level desktop gaming graphics markets if they stacked an SRAM die on an APU as SLC/Infinity Cache.
I'm not sure that's sensible even in a perfect world, let alone one where it wouldn't put your "entry-level" APU at the same cost as a CPU+(trash-tier)GPU combo. IDK what the typical texture cache hit rate is for a GPU these days, but I expect it's "high enough". If you want your entire collection of atlases in there though, plus geometry, plus shaders, etc., then yeah, it's probably going to be cheaper to just use VRAM and a dedicated chip.
Besides, you're talking about the one market AMD already has no competition in. Seems kinda silly to price themselves OUT of it for no reason after all that hard work.
Originally posted by F.Ultra:
It could also be that this is simply maxing out LZ4 performance on the CPU, i.e. the latency of the compression/decompression at this CPU's speed is just at the threshold where more cache doesn't help: the prefetch is as fast as, or faster than, the algorithm.
Originally posted by arQon:
LZ4, unlikely. XZ I'm not familiar enough with to say.

Last edited by willmore; 26 April 2022, 07:34 AM.
Originally posted by EvilHowl:
I must say I'm impressed with the 5800X3D. It can easily beat the 12900K in gaming, which is exactly what AMD claimed, while drawing less power. It can be a drop-in replacement for almost any AM4 board, and it isn't as RAM-dependent as other SKUs are (mainly because of its big pool of L3 cache). It's a little pricey, but it's a top-of-the-line CPU anyway.
I don't think we have ever seen such a versatile platform. AM4 really delivered!