Intel Pentium vs. AMD Ryzen 3 Performance For Linux Gaming


  • partizann
    replied
    There will certainly be other factors, but I think latency is one of the major ones.



  • duby229
    replied
    Originally posted by partizann View Post

    All this about CPU clocks and IPC is actually BS. If you want to know why Intel *Lake CPUs game better, you have to look at the memory latency of the architectures.
    Intel Sky/Coffee/... Lake CPUs have around ~50 ns latency with DDR4-2400 @ CL15, while with the same RAM Ryzen CPUs have around 90 ns.
    That's why Skylake-X on the X299 platform, even though it is basically the same architecture as Skylake, has a different memory architecture and thus higher memory latency, closer to that of the Ryzen CPUs at ~80 ns; clocked similarly to the Ryzen CPUs, it also trails behind Skylake in games.
    But the Skylake architecture doesn't scale as much with faster, lower-latency memory as Ryzen does: while a jump in RAM speed/latency from CL15 @ 2400 MT/s to CL14 @ 3466 MT/s gives only a few percent improvement on Skylake, Ryzen gains tens of percent in games, as you are easily cutting the latency by 30 ns.

    And Radeon graphics are even more sensitive to these latencies, as they don't have a software-based scheduler like NVIDIA does. https://www.youtube.com/watch?v=S6yp7Pi39Z8

    It would be nice if Michael could also run some tests with faster, lower-latency memory on Linux games.
    Ah, yeah, that's a pretty good point. What I said is not BS, though; it -is- the largest part of the equation. But otherwise I think you are totally correct.



  • nuetzel
    replied
    Originally posted by darkbasic View Post

    Drirc was supposed to do so in several of those games.
    In several... --- you know what I mean. ;-)
    We have it enabled for BioShock Infinite, but NOT for OpenArena, currently.

    Maybe Michael can redo the tests with mesa_glthread=true explicitly enabled.

    Greetings,
    Dieter
    Last edited by nuetzel; 25 January 2018, 07:10 AM.
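
    For anyone who wants to try: setting mesa_glthread=true in the environment forces GL thread marshalling on for a single run, and a per-application entry in ~/.drirc makes it persistent. A sketch of the drirc format (the OpenArena executable name here is an assumption):

```xml
<!-- Sketch of a per-application drirc entry; install as ~/.drirc.
     The executable name "openarena" is an assumption. -->
<driconf>
  <device>
    <application name="OpenArena" executable="openarena">
      <option name="mesa_glthread" value="true"/>
    </application>
  </device>
</driconf>
```

    Mesa ships its own defaults in the same format, which is why BioShock Infinite already gets glthread without any user configuration.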



  • dungeon
    replied
    Originally posted by bridgman View Post
    Please, kitty, don't be shy; we all care about performance, so don't give up so soon. Someone needs to fix all these old-millennium CPU vulnerabilities so that a negligible performance impact can be expected, at least on Epyc CPUs

    Just looking right now at what is written in the white paper AMD posted yesterday:

    Last edited by dungeon; 25 January 2018, 06:13 AM.



  • partizann
    replied
    Originally posted by duby229 View Post

    I don't think that's quite right; as you can see, the Intel CPU has a higher clock speed and slightly better IPC. I think that's the reason you see AMD cards perform slightly better on the Pentium. That 18 FPS difference seems about the right margin for the actual performance difference. But I do think you are right about NVIDIA, though.
    All this about CPU clocks and IPC is actually BS. If you want to know why Intel *Lake CPUs game better, you have to look at the memory latency of the architectures.
    Intel Sky/Coffee/... Lake CPUs have around ~50 ns latency with DDR4-2400 @ CL15, while with the same RAM Ryzen CPUs have around 90 ns.
    That's why Skylake-X on the X299 platform, even though it is basically the same architecture as Skylake, has a different memory architecture and thus higher memory latency, closer to that of the Ryzen CPUs at ~80 ns; clocked similarly to the Ryzen CPUs, it also trails behind Skylake in games.
    But the Skylake architecture doesn't scale as much with faster, lower-latency memory as Ryzen does: while a jump in RAM speed/latency from CL15 @ 2400 MT/s to CL14 @ 3466 MT/s gives only a few percent improvement on Skylake, Ryzen gains tens of percent in games, as you are easily cutting the latency by 30 ns.

    And Radeon graphics are even more sensitive to these latencies, as they don't have a software-based scheduler like NVIDIA does. https://www.youtube.com/watch?v=S6yp7Pi39Z8

    It would be nice if Michael could also run some tests with faster, lower-latency memory on Linux games.



  • bridgman
    replied
    Originally posted by dungeon View Post
    To me all this is so obvious: just look at the first picture I posted in this thread and read nothing else... everything else is just further explanation for those who do not understand what they should look at in that picture



  • darkbasic
    replied
    Originally posted by nuetzel View Post

    If you don't enabled it?
    Drirc was supposed to do so in several of those games.



  • dungeon
    replied
    Originally posted by F i L View Post
    Dungeon, we understand your point. A general consumer might be unpleasantly surprised that a CPU they bought with twice the advertised cores doesn't improve their frame-rate in games.
    With these results here we can say: with amdgpu, no, but with nvidia, yes.

    But you keep saying things like "AMD's drivers go slower with more cores".. which, as it's stated, is (probably) false.
    Yes, it is false by common sense, but in some cases, like this one, it is true.

    But when someone tries to explain how that statement is false, you just repeat your first point, and throw in patronizing shit like..
    I am not generalising things here; it is just that this issue looks general, since it happens in several cases.

    I honestly now think you might just be trolling, like someone suggested earlier.
    To me all this is so obvious: just look at the first picture I posted in this thread and read nothing else... everything else is just further explanation for those who do not understand what they should look at in that picture



    The CPUs are changed, and what happens? Card X goes down, but Card Y goes up. If purple looks OK to someone, to me it doesn't.
    Last edited by dungeon; 25 January 2018, 01:11 AM.



  • F i L
    replied
    Dungeon, we understand your point. A general consumer might be unpleasantly surprised that a CPU they bought with twice the advertised cores doesn't improve their frame-rate in games. But you keep saying things like "AMD's drivers go slower with more cores"... which, as it's stated, is (probably) false. But when someone tries to explain how that statement is false, you just repeat your first point and throw in patronizing shit like...

    Originally posted by dungeon View Post
    Fork your brain too, otherwise please try to focus on anything, can you?
    ..or you say things like..

    Originally posted by dungeon View Post
    Your devs should really test things with CPUs from different vendors; if you only test the way you want here, on one model with cores disabled, you may never spot this amdgpu-specific issue, which does not happen on the nvidia driver
    ...to an AMD employee, as if your suggestion were some kind of helpful insight that the entire technical staff at AMD just hadn't considered before.

    I honestly now think you might just be trolling, like someone suggested earlier...



  • dungeon
    replied
    Originally posted by bridgman View Post
    Oh good grief. The amdgpu+mesa stack does *not* "scale down with more cores". If you run on the same CPU (eg Ryzen 1200 or whatever) with 2 cores enabled vs 4 cores enabled you will get the same or higher performance with 4 cores.
    Yes, theoretically and as expected, I agree. In practice, something else seems to be happening here.

    If you want to prove that amdgpu+mesa goes down with more cores you need a graph that shows Ryzen with 2 cores enabled vs same Ryzen with 4 cores enabled, not two different CPUs with two different core designs.
    Well, you are not right here, Bridgman, as this is a real-world test; an artificial test on just one CPU does not prove anything in this case. With that you would just be pretending that everything is OK, but in practice it isn't...

    Your devs should really test things with CPUs from different vendors; if you only test the way you want here, on one model with cores disabled, you may never spot this amdgpu-specific issue, which does not happen on the nvidia driver

    Imagine an average Joe who currently has a 2-core Intel and wants to buy a 4-core AMD and use amdgpu - from that point of view things do not look good, do they? He would improve performance with nvidia, but with amdgpu things would get worse - if that is not obvious

    The amdgpu+mesa stack as tested (not sure if "as-tested" enabled threading) can not take much advantage of the Ryzen's additional cores, and so the Pentium's higher clocks end up having more impact on frame rate than the additional cores.
    But the nvidia driver does take advantage of them - just look at OpenArena as an example. Compared to 2 cores, where the results look the same, amdgpu goes down on 4 cores while nvidia continues to scale up. Look further at BioShock, Counter-Strike and so on; the same thing happens.

    To note again: glthread is enabled for BioShock by default in mesa's drirc, but no, that does not matter, as the same thing happens
    Last edited by dungeon; 24 January 2018, 11:44 PM.

