
AMD Ryzen 7 1800X vs. Intel Core i7 7700K Linux Gaming Performance


  • AMD Ryzen 7 1800X vs. Intel Core i7 7700K Linux Gaming Performance

    Phoronix: AMD Ryzen 7 1800X vs. Intel Core i7 7700K Linux Gaming Performance

    For those craving some Linux gaming benchmarks from the newly-released AMD Ryzen 7 1800X processor, here are some test results. This initial comparison benchmarks the Ryzen 7 1800X against the Core i7 7700K, running both processors at stock speeds with a Radeon R9 Fury graphics card paired with AMDGPU+RadeonSI for the Linux graphics driver stack.

    http://www.phoronix.com/vr.php?view=24225

  • nuetzel
    replied
    Originally posted by droste View Post

    If you want to try, you can boot with "pcie_aspm.policy=performance" as a kernel parameter; this should give you the same result.

    You can check with:
    Code:
    cat /sys/module/pcie_aspm/parameters/policy
    Without the parameter it should say "[default] performance powersave"; with the parameter it should say "default [performance] powersave".
    If there's no performance difference, then you're not running into this problem.
    But _only_ if your (distro) kernel supports that, i.e. offers the 'missing' options. --- The openSUSE Kernel:stable (Tumbleweed) does NOT.

    Code:
    /opt/mesa> grep ASPM /boot/config-4.10.1-5.gf764d42-default
    CONFIG_PCIEASPM=y
    # CONFIG_PCIEASPM_DEBUG is not set
    CONFIG_PCIEASPM_DEFAULT=y
    # CONFIG_PCIEASPM_POWERSAVE is not set
    # CONFIG_PCIEASPM_PERFORMANCE is not set

    Some numbers would be nice!

    Greetings,
    Dieter

    Leave a comment:


  • hoohoo
    replied
    Originally posted by gigaplex View Post

    That's expected. High resolutions move the bottleneck to the GPU rather than the CPU.
    For the video cards - Fury vs GTX - yes, that makes sense. But why should it be the case here, when the independent variable is the video card and the dependent variable is the CPU?

    Leave a comment:


  • dimko
    replied
    Originally posted by debianxfce View Post

    If I can see this clearly with my X4 845, with Ryzen you should see it even more clearly. The Linux performance governor is hindering the speed of my custom kernel, even when the performance governor is used.
    What exactly do you see? Please elaborate. Did you manage to squeeze some performance out of your machine by not using the governor? I thought it's enabled by default and every Linux machine MUST use it?

    Leave a comment:


  • indepe
    replied
    Originally posted by droste View Post

    If you want to try, you can boot with "pcie_aspm.policy=performance" as a kernel parameter; this should give you the same result.

    You can check with:
    Code:
    cat /sys/module/pcie_aspm/parameters/policy
    Without the parameter it should say "[default] performance powersave"; with the parameter it should say "default [performance] powersave".
    If there's no performance difference, then you're not running into this problem.
    Thanks, sounds good, will do.

    Leave a comment:


  • Tomin
    replied
    Originally posted by sarfarazahmad View Post
    I am thinking amd64 based SBCs with pluggable ram, pluggable processors, m.2 bus for SSDs, pluggable wifi/NIC cards. (Maybe I have gone too far )
    I don't think that is a SBC anymore.

    Erm, should I say an SBC (es bii see or whatever you'd use) or a SBC (single board computer)?
    Last edited by Tomin; 03-04-2017, 10:11 AM. Reason: Just wondering...

    Leave a comment:


  • droste
    replied
    Originally posted by indepe View Post

    Is it possible to translate this into english? Do I need to worry about running into the same problem? (Once I have a system able to run Vulkan without recompiling the kernel, that is, which will be soon.)
    If you want to try, you can boot with "pcie_aspm.policy=performance" as a kernel parameter; this should give you the same result.

    You can check with:
    Code:
    cat /sys/module/pcie_aspm/parameters/policy
    Without the parameter it should say "[default] performance powersave"; with the parameter it should say "default [performance] powersave".
    If there's no performance difference, then you're not running into this problem.
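    The bracketed word in that sysfs file marks the active policy. As a quick sketch for pulling it out in a script (the helper name active_policy is my own, not part of any tool):
    Code:
    # Sketch: extract the active ASPM policy (the bracketed word) from the
    # contents of /sys/module/pcie_aspm/parameters/policy.
    active_policy() {
      printf '%s\n' "$1" | sed 's/.*\[\([^]]*\)\].*/\1/'
    }

    active_policy "default [performance] powersave"   # prints: performance
    On a real system you would feed it "$(cat /sys/module/pcie_aspm/parameters/policy)".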

    Leave a comment:


  • indepe
    replied
    Originally posted by debianxfce View Post
    You cannot tune a Windows desktop, but Linux desktops you can. Use a non-debug 300Hz kernel, disable the cpufreq governor, etc. If Phoronix needs games with threads, benchmark games with wine-staging; that will distribute load across cpu cores.
    What kind of performance improvement are you measuring when doing this? Or is your suggestion specifically for Ryzen, and you don't have one (yet)?
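    For anyone wanting to check what debianxfce describes, the active governor is readable from sysfs. A sketch assuming the standard Linux cpufreq layout (the helper name show_governors is my own; cores or systems without the interface are simply skipped):
    Code:
    # Sketch: print each core's cpufreq governor, skipping cores (or whole
    # systems) where the standard cpufreq sysfs interface is not present.
    show_governors() {
      for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        [ -r "$g" ] && printf '%s: %s\n' "$g" "$(cat "$g")"
      done
      return 0
    }

    show_governors
    Writing "performance" to each scaling_governor file (as root) is the usual way to switch, rather than disabling the governor outright.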

    Leave a comment:


  • zboson
    replied
    Originally posted by bakgwailo View Post
    Apparently AMD has suggested disabling SMT for gaming benchmarks - it seems to have made a marked difference for some games in the Windows benchmarks I have seen. It would also be cool to see how it compares to the FX-8370, as in the previous article.
    http://www.anandtech.com/show/11170/...0x-and-1700/10

    "However, when running in SMT mode but only with a single thread, the statically partitioned parts of the core can end up as a bottleneck, as they are idle half the time."

    That's an interesting point. There are cases where SMT is not helpful, for example large dense matrix multiplication: if a single thread can take full advantage of a core, adding another thread will be less efficient. At least that is true with Hyper-Threading. So there are cases where you only want one thread per core. I wonder, then, whether Zen with SMT enabled is less efficient running one thread per core than it is with SMT disabled.

    It's even more complicated on Knights Landing because it supports four threads per core. Some suggest turning off Hyper-Threading on Knights Landing. I don't have enough experience with it yet to know what is best. It of course depends on the application. Here is an interesting comment about four threads per core by the "father" of Xeon Phi:
    http://www.agner.org/optimize/blog/read.php?i=761#763
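    The "one thread per core" setup can be derived from the kernel's topology files. A sketch (the helper name and the sample sibling lists are illustrative; on a real system the inputs would come from /sys/devices/system/cpu/cpu*/topology/thread_siblings_list):
    Code:
    # Sketch: given each logical CPU's thread_siblings_list contents
    # (e.g. "0,4" with SMT on, "0" with SMT off), keep the first ID of each
    # sibling group so a compute-bound job gets one thread per physical core.
    one_thread_per_core() {
      for s in "$@"; do
        printf '%s\n' "${s%%[,-]*}"
      done | sort -nu | paste -sd, -
    }

    # Example: a 4-core/8-thread CPU where logical CPUs n and n+4 share a core.
    one_thread_per_core "0,4" "1,5" "2,6" "3,7" "0,4" "1,5" "2,6" "3,7"
    # prints: 0,1,2,3
    The resulting list could then be fed to e.g. "taskset -c 0,1,2,3 ./benchmark" to pin one thread per core without disabling SMT in the BIOS.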

    Leave a comment:


  • agaman
    replied
    Why is only radv affected by this? Do you see any improvement in OpenGL after this change?
    Is radeonsi doing something different to change the PCIe performance mode that radv isn't doing?

    Leave a comment:
