AMD Ryzen 7 5800X3D On Linux: Not For Gaming, But Very Exciting For Other Workloads


  • willmore
    replied
    Originally posted by nicalandia View Post

    LZ4 Does not appear to take advantage of the 3D V-Cache

    LZ4 is very cache friendly. It reads through its input buffer sequentially, copies literals to the output buffer, and occasionally reads back a short distance in the output buffer to copy a match to the current end of the output. This makes it very tolerant of cache eviction, and it's very easy for simple prefetchers to keep up with the way LZ4 accesses memory. Other compression programs use methods with much larger cache footprints, but LZ4 is not one of them. It's suitable for microcontrollers, etc. If you have enough memory for the input buffer and the output buffer, you can do LZ4 without extra storage. You can't do that with BZ2 or other transform-based systems.
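    The access pattern described above can be sketched with a toy LZ77-style decoder. This is an illustrative sketch only, not the real LZ4 block format: literals stream in sequentially, and matches copy from a recent window of the output buffer, so both buffers stay cache-resident.

    ```python
    # Toy LZ77-style decompression, illustrating the access pattern
    # described above (NOT the actual LZ4 block format): literals are
    # written sequentially, and each match reads back a short distance
    # in the output buffer and copies to its current end.

    def decompress(tokens):
        out = bytearray()
        for literals, offset, length in tokens:
            out += literals                  # sequential write of literals
            start = len(out) - offset        # read back 'offset' bytes
            for i in range(length):          # byte-by-byte copy correctly
                out.append(out[start + i])   # handles overlapping matches
        return bytes(out)

    # "abc" followed by a 6-byte copy starting 3 bytes back:
    print(decompress([(b"abc", 3, 6)]).decode())  # abcabcabc
    ```

    Note how the only non-sequential reads are short hops backward into recently written output, which is exactly why the working set fits even a small cache.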

    Originally posted by Raka555 View Post
    Being a bit pedantic here, but the apps don't "take advantage" of a larger cache.
    It is more that bloated apps require larger caches.

    If LZ4 were well written, then you wouldn't see much of a boost.
    You're speaking in the context of compression programs here, and you are dead wrong. Saying that a compression program is using too much memory is like saying that an algorithm is lazy because it didn't find a way to solve the Traveling Salesman problem in polynomial time. I don't want to quote the Cat in the Hat, but he's right.

    To address your overly pedantic opinion, a program is well written if it takes advantage of the hardware on which it runs. LZ4 doesn't happen to benefit here because it's already fully satisfied by even a basic processor; it was designed for small processors. Honestly, it's a bit silly to be using it as a benchmark in this way. It's about as good a benchmark as `grep "cpu MHz" /proc/cpuinfo`.



  • chocolate
    replied
    Great companion to AM4 motherboards until their end of life!

    I think it's just as great for Linux gaming as it is for Windows gaming in all modern, more "bloated" AAA games.
    It's kind of a wash with the 5800X at stock frequencies for older games, because those supposedly already "fit" within the constraints of the abysmally small Intel caches of the early Core days and benefit from higher clocks in CPU-bound scenarios; but their absolute performance is probably already above v-sync for any reasonable monitor.

    Michael, if I'm not mistaken, your own testing in the past showed how Rise/Shadow of the Tomb Raider scaled well with RAM frequencies. I fully expected it to perform better with a larger cache; the game was basically begging for data non-stop (as for the cause, we can only speculate, but that's how it seems to be, and it's not an isolated case in the industry nowadays).
    Therefore, I'm not convinced it's appropriate to reiterate how poorly the 5800X3D performs for "Linux gaming" (in such a general sense) so many times in the article, given the set of games benchmarked.
    Using it for the same non-native titles that other reviewers have benchmarked on Windows would surely bring the same benefits. Unfortunately, those titles probably cannot be automated.

    Given that the results here already hint at industry trends quite accurately, perhaps it would be best to remain cautious instead of disregarding the 5800X3D for a gaming use case.
    For example, Strange Brigade has always been praised for its performance relative to its graphics, and apart from using Vulkan (the main, simplistic explanation some reviewers love to resort to), cache-friendliness may very well be a part of that, hence its slight decrease under the 5800X3D compared to the 5800X.

    Cheers.



  • atomsymbol
    replied
    It is probable that large (by year-2022 standards) L3 caches will become a standard feature in future CPUs, because the 5800X3D has only a few downsides compared to the 5800X. 3% lower performance in 90% of cases is a reasonable tradeoff for enabling 25-50% higher performance (instructions per clock) in the other 10% of cases.
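    That tradeoff can be checked with quick back-of-the-envelope arithmetic. Note that the 90%/10% split and the speedup figures are the commenter's rough estimates, not measured data:

    ```python
    # Weighted average throughput of the 5800X3D relative to the 5800X,
    # using the commenter's estimates: 97% speed in 90% of workloads,
    # 125-150% speed in the remaining 10%.
    common, rare = 0.90, 0.10

    low  = common * 0.97 + rare * 1.25   # pessimistic cache-bound gain
    high = common * 0.97 + rare * 1.50   # optimistic cache-bound gain

    print(f"{low:.3f} .. {high:.3f}")    # roughly break-even to ~2% ahead overall
    ```

    So even under these rough numbers the big cache is close to free on average, and the occasional large wins come at almost no aggregate cost.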



  • atomsymbol
    replied
    Originally posted by Raka555 View Post
    There is nothing to learn here as a developer.
    A developer of what?



  • nicalandia
    replied
    It would be interesting to test whether there is any performance boost for Windows games running on Linux + Proton/Wine (SteamOS) vs. Windows 10/11 with 3D V-Cache.
    Last edited by nicalandia; 25 April 2022, 03:12 PM.



  • Michael
    replied
    Originally posted by domih View Post
    Thanks for this article as well as the one on Milan X!

    I'm not a gamer so I'm mostly interested in the possible performance increase brought by 3D V-cache in development related tools and servers. What about (beyond ML/DL):
    - JSON parsing (in various languages),
    - XML parsing and other XML operations (in various languages),
    - MySQL, PostgreSQL,
    - Cassandra,
    - Large Python or PHP list and dict handling,
    - JIT compilation in Java, Python, PHP,
    - Crypto (AES, RSA),
    - JavaScript,
    - Web servers (Apache, Nginx).
    Click the openbenchmarking.org link on the last page of the article, some of those are covered. Others are coming.

    Navigating from this page will also yield in-progress metrics for other tests - https://openbenchmarking.org/s/AMD+R...5800X3D+8-Core



  • pete910
    replied
    Not surprised to see little gaming increase compared to Windows with that collection of games.

    The workload tests were a surprise, as those looked to be the opposite of Windows. Having said that, Phoronix does better/broader benchys compared to the Windows guys.



  • domih
    replied
    Thanks for this article as well as the one on Milan X!

    I'm not a gamer so I'm mostly interested in the possible performance increase brought by 3D V-cache in development related tools and servers. What about (beyond ML/DL):
    - JSON parsing (in various languages),
    - XML parsing and other XML operations (in various languages),
    - MySQL, PostgreSQL,
    - Cassandra,
    - Large Python or PHP list and dict handling,
    - JIT compilation in Java, Python, PHP,
    - Crypto (AES, RSA),
    - JavaScript,
    - Web servers (Apache, Nginx).



  • fuzz
    replied
    Originally posted by brucethemoose View Post
    The only games I'm interested in buying such a CPU for are logic/simulation bound ones: Rimworld, modded Minecraft, Distant Worlds 2, Stellaris, Satisfactory, Starsector and so on. Games that need every bit of single core performance they can to scale up.
    I'm going to try Dwarf Fortress on it, which typically runs into limits on map size/population pretty easily for exactly these reasons. I think Rimworld is the same.
    Last edited by fuzz; 28 April 2022, 11:47 AM.



  • brucethemoose
    replied
    The only games I'm interested in buying such a CPU for are logic/simulation bound ones: Rimworld, modded Minecraft, Distant Worlds 2, Stellaris, Satisfactory, Starsector and so on. Games that need every bit of single core performance they can to scale up.

    Hence I don't get this fascination with benchmarking GPU-bound AAA titles at low settings and low resolution, especially with a focus on FPS instead of frametimes... and I'm not just talking about Phoronix.

