
Ryzen 7 2700 / Ryzen 7 2700X / Core i7 8700K Linux Gaming Performance With RX Vega 64, GTX 1080 Ti


  • #31
    Originally posted by Jabberwocky View Post

    Sorry for taking it out of context, but stock is not what stock was a year or two ago. With Coffee Lake and Zen+ (depending on what motherboard you have), the CPU will overclock automatically past stock speed as long as you have proper cooling.
    Sure - "stock" in this context means "default behaviour", so the 2700X will overclock itself a little as long as there is thermal headroom, and therefore better cooling solutions do better on the auto overclock. AMD also ships a better thermal solution than Intel does, though... You could go down that rabbit hole for days!

    I guess a better title would be "As Shipped".
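
    If anyone wants to watch that default behaviour for themselves, here's a rough sketch (assuming a Linux box with the usual cpufreq sysfs interface - adjust the paths if your distro differs) that samples the reported core clocks for a few seconds, so you can see the boost ride above base clock while thermals allow:

    # Sample the per-core clocks a few times to watch boost behaviour.
    # Assumes the standard Linux cpufreq sysfs interface (values in kHz).
    import glob, time

    for _ in range(5):
        freqs = []
        for path in sorted(glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq")):
            with open(path) as f:
                freqs.append(int(f.read()) / 1000.0)  # kHz -> MHz
        print("min %.0f MHz  max %.0f MHz" % (min(freqs), max(freqs)))
        time.sleep(1)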



    • #32
      Originally posted by humbug View Post
      Something is wrong with how people are porting games to Linux. On Windows it seems that modern games are more multithreaded, which allows Ryzen to keep up.

      Something happens when porting and it becomes more reliant on single-threaded performance. When you use 4K, which is a more GPU- than CPU-bottlenecked scenario, the difference goes away.

      I dunno, maybe it's the same reason every game takes a 20% performance hit when ported, even on Intel CPUs.
      That is an interesting point. Thank you!



      • #33
        Originally posted by abott View Post
        Wonder what is making the Ryzen chips so abysmal.
        Single-thread performance, what else? Games like that single-thread performance to deliver higher fps. Look at the PassMark results, for example:

        https://www.cpubenchmark.net/cpu.php....70GHz&id=3098
        https://www.cpubenchmark.net/cpu.php...+2700X&id=3238

        See the Single Thread Rating there: 2708 vs 2203.

        That is a ~19% difference on average we are talking about. Is that easy to understand?

        Now, 1) about half of that single-threaded performance difference is simply due to the higher clock (the difference between 4.3 GHz and 4.7 GHz is ~9%), and 2) the other half is a difference that even AMD doesn't deny.

        So what might be improved? The worst-case scenarios, maybe (emphasis on maybe), by something in the range of 10%, since here the worst case is 28% slower while roughly 19% slower on average is what you'd expect.
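
        As a quick sanity check on those percentages (nothing fancy, just the PassMark ratings and the boost clocks quoted above):

        # Back-of-the-envelope check of the ~19% and ~9% figures quoted above.
        intel_st, amd_st = 2708, 2203   # PassMark single-thread ratings
        print("single-thread gap: %.1f%%" % ((1 - amd_st / intel_st) * 100))   # ~18.6%
        print("clock gap (4.3 vs 4.7 GHz): %.1f%%" % ((1 - 4.3 / 4.7) * 100))  # ~8.5%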

        Performance is a kind of resource management. So a user can already improve things just by overclocking; on Linux the user might also check whether the frequency governor is doing something weird and causing shortfalls there, and the new IBPB Spectre mitigation for AMD might also eat something, etc. Many other, more technical reasons could explain a particular bottleneck - basically anything that eats resources in this bloated world.
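
        For anyone who wants to rule those two out, a minimal sketch (standard Linux sysfs paths, nothing distro-specific) that prints the active cpufreq governor and the Spectre v2 mitigation status:

        # Print the active cpufreq governor and the Spectre v2 mitigation line.
        # Standard Linux sysfs paths; values will obviously differ per system.
        def read(path):
            try:
                with open(path) as f:
                    return f.read().strip()
            except OSError:
                return "n/a"

        print("governor  :", read("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"))
        print("spectre_v2:", read("/sys/devices/system/cpu/vulnerabilities/spectre_v2"))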

        On the other hand, if an app is properly multithreaded or values more cores (now compare the multi-thread rating numbers), there is nothing to worry about: the AMD CPU is already there and about 6% faster on average.
        Last edited by dungeon; 22 May 2018, 03:40 AM.



        • #34
          Originally posted by debianxfce View Post
          A 400 MHz faster clock on the Intel CPU is unfair
          Even just powering on an Intel CPU is unfair; it will surely destroy the competition then! It's AMD's fault that Ryzen cannot clock as high.
          Last edited by xpue; 22 May 2018, 04:49 AM.



          • #35
            Michael

            Typo on page 2, last sentence: "Or at 4K, the RX Vega 64 performance was the same across the three tested GPUs while the GTX 1080 Ti continued scaling just fine."



            • #36
              Oh wow, poor Ryzen.
              I was quite sure on the day Intel's 8th gen came out that these were the CPUs to buy when it comes to gaming.
              Michael, would you care to add an article with the same results but with either the Core i3 8350K or the Core i3 8100 included, so that we can see whether these drops in performance are more of an AMD issue or whether Linux gaming really demands that much CPU power?



              • #37
                Sorry for the long post, I am unaware of any spoiler/hide functionality on the forum.

                Originally posted by dungeon View Post

                Single-thread performance, what else? Games like that single-thread performance to deliver higher fps. Look at the PassMark results, for example:

                https://www.cpubenchmark.net/cpu.php....70GHz&id=3098
                https://www.cpubenchmark.net/cpu.php...+2700X&id=3238

                See the Single Thread Rating there: 2708 vs 2203.

                That is a ~19% difference on average we are talking about. Is that easy to understand?

                Now, 1) about half of that single-threaded performance difference is simply due to the higher clock (the difference between 4.3 GHz and 4.7 GHz is ~9%), and 2) the other half is a difference that even AMD doesn't deny.

                So what might be improved? The worst-case scenarios, maybe (emphasis on maybe), by something in the range of 10%, since here the worst case is 28% slower while roughly 19% slower on average is what you'd expect.

                Performance is a kind of resource management. So a user can already improve things just by overclocking; on Linux the user might also check whether the frequency governor is doing something weird and causing shortfalls there, and the new IBPB Spectre mitigation for AMD might also eat something, etc. Many other, more technical reasons could explain a particular bottleneck - basically anything that eats resources in this bloated world.

                On the other hand, if an app is properly multithreaded or values more cores (now compare the multi-thread rating numbers), there is nothing to worry about: the AMD CPU is already there and about 6% faster on average.
                Garbage in, garbage out.

                Both single- and multi-threaded performance vary with the workload. Intel does not have 19% better single-threaded performance on average, because the gap changes depending on the workload. Here are some single-threaded examples of different (you guessed it) workloads.


                Guru3D's CPU-Z results [image]
                Anandtech's Cinebench 15 [image]
                Tom's Hardware's SiSoft Sandra AES test [image]

                Here are some multi-threaded workloads.

                Guru3D's CPU-Z results [image]
                Anandtech's Cinebench 15 [image]
                Tom's Hardware's SiSoft Sandra SHA test [image]
                Guru3D's 3DMark CPU test [image]

                It's easy to cherry-pick examples and only show what supports your "understanding" of the situation. It is not so easy to investigate and report on why applications behave differently. Is it CPU cache, memory latency, NUMA, or missing/extra instructions? Etc.

                Anandtech found a problem with their Ryzen vs. Coffee Lake tests and wrote an article about the findings. A short quote from the conclusion, for those who do not have time to read the five-page write-up:

                Why This Matters, and How AnandTech is Set to Test in the Future

                The interesting thing to come out of both Intel and AMD is that they seem to not worry if HPET is enabled or not. Regardless of the advice in the past, both companies seem to be satisfied when HPET is enabled in the BIOS and irreverent when HPET is forced in the OS. For AMD, the result change was slight but notable. For Intel we saw substantial drops in performance when HPET was the forced timer, and removing that lock improved performance.

                It would be odd to hear if Intel are not seeing these results internally, and I would expect them to be including an explicit mention of ensuring the HPET settings in the operating system in every testing guide. Or Intel's thinking could be that because HPET not being forced is the default OS position, they might not see it as a setting they need to explicitly mention in their reviewer guides. Unfortunately, this opens up possibilities when it comes to overclocking software interfering with how the timers are being used.
                An opposing view on the issue (mentioned in Anandtech's article) is overclockers.at's response on the matter: "Let's rip the bandaid off: Yes, there IS an HPET bug and no, Intel is NOT cheating. The contrary is the case, Intel suffers from this bug and we are pretty sure that Anandtech is not the only review site that has published wrong results because of these issues. Evidently there are lots of people out there having low framerates on their setups and no explanation for it."
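
                For what it's worth, the HPET issue above is about the timer being forced on the Windows side; on Linux you can at least see which clocksource the kernel is actually using with something like this (usual sysfs layout assumed):

                # Show the kernel's current and available clocksources (tsc, hpet, acpi_pm, ...).
                # Usual Linux sysfs layout assumed.
                base = "/sys/devices/system/clocksource/clocksource0/"
                for name in ("current_clocksource", "available_clocksource"):
                    with open(base + name) as f:
                        print(name + ":", f.read().strip())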

                My conclusion: Intel does not win in all single-threaded tests and AMD does not win in all multi-threaded tests. Intel has a bug in HPET timing which is problematic for accurate time measurement (it could go both ways). Linux game benchmarks seem to favour Intel for reasons unknown. humbug (see below) suggested that it could be related to the developers who ported the games in question. That does not make sense to me if you consider the massive difference in performance between Intel and Ryzen; the single-threaded gap should not be that big in any type of workload, let alone gaming. My money is on system configuration (motherboard/RAM/kernel related). I would like to do some tests myself. I am getting an X470 motherboard later this week; unfortunately the only Zen+ CPU that I have is a 2200G. I'll play around on my 1800X in the meanwhile.

                Originally posted by humbug View Post
                Something is wrong with how people are porting games to Linux. On Windows it seems that modern games are more multithreaded, which allows Ryzen to keep up.

                Something happens when porting and it becomes more reliant on single-threaded performance. When you use 4K, which is a more GPU- than CPU-bottlenecked scenario, the difference goes away.

                I dunno, maybe it's the same reason every game takes a 20% performance hit when ported, even on Intel CPUs.
                PS: Michael, is there any way we can get more hardware info in PTS? Memory info is quite limited at the moment; something like lshw, or a script that parses lshw's output, could be useful.
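
                Something along these lines is what I had in mind - a rough sketch only (needs lshw installed and root for full DIMM detail, and the JSON field names vary a bit between lshw versions):

                # Rough sketch: pull the memory nodes out of lshw's JSON output.
                # Needs lshw installed; run as root for full DIMM detail.
                import json, subprocess

                out = subprocess.run(["lshw", "-json"], capture_output=True,
                                     text=True, check=True).stdout
                data = json.loads(out)
                roots = data if isinstance(data, list) else [data]

                def walk(node):
                    yield node
                    for child in node.get("children", []):
                        yield from walk(child)

                for root in roots:
                    for node in walk(root):
                        if node.get("class") == "memory":
                            print(node.get("id"), node.get("description", ""), node.get("size", ""))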



                • #38
                  Originally posted by vegabook View Post
                  Funny though how the good ol' RX 480/580s completely spank Nvidia when it comes to "raw" compute such as mining. It takes a 1080 Ti to get faster compute than an RX 580... and the Vega 64 just sails right past even that.
                  Miners use some driver patches to increase hash rate. I was setting up some rigs a month or so ago; a 580 did about 29-30 MH/s, but from what I was reading online people got anywhere between 22 and 25 MH/s back in mid-2017 (probably not patched and overclocked). I don't know if those driver patches exist for NVIDIA GPUs, and I don't even know if they are official on the AMD side (I didn't care to look it up). So, as you can see, there is an "uplift" of "up to" 8 MH/s from the combination of drivers, frequencies, and patches, and that comes to about a quarter or more in terms of performance. Aside from it looking cool to have 10+ GPUs in a row, it's a pity such hardware is wasted on blockchain.

                  Jabberwocky That's why I suggested testing Portal on low settings. I doubt the GPU hits 100% usage in either case even on maximum settings (or high, whatever settings were used in this test), yet the difference between the CPUs is close to what is expected, with Portal hitting around 1200 FPS on both systems and Intel having a slight advantage. It would be fun to see the difference on low settings, for multiple reasons.
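
                  On the AMD side it's easy enough to check whether the card is actually pegged during a run - a small sketch, assuming the card shows up as card0 and your kernel's amdgpu exposes gpu_busy_percent (newer kernels do; otherwise a tool like radeontop does the same job):

                  # Poll amdgpu's reported GPU utilisation while a benchmark runs.
                  # Assumes card0 and a kernel whose amdgpu exposes gpu_busy_percent.
                  import time

                  path = "/sys/class/drm/card0/device/gpu_busy_percent"
                  for _ in range(10):
                      with open(path) as f:
                          print("GPU busy: %s%%" % f.read().strip())
                      time.sleep(1)
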
                  Last edited by leipero; 22 May 2018, 06:01 PM.



                  • #39
                    Originally posted by leipero View Post
                    Miners use some driver patches to increase hash rate. I was setting up some rigs a month or so ago; a 580 did about 29-30 MH/s, but from what I was reading online people got anywhere between 22 and 25 MH/s back in mid-2017 (probably not patched and overclocked). I don't know if those driver patches exist for NVIDIA GPUs, and I don't even know if they are official on the AMD side (I didn't care to look it up).
                    The patches weren't really mining-specific, other than being lumped together in a release as a consequence of some investigation into mining performance. Most of them were general optimizations applicable to any large-memory-footprint workload (e.g. large page support to minimize TLB thrashing).
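
                    Not the driver-side large pages bridgman is describing, but as an illustration of the same TLB-pressure idea on the CPU side, here's a quick sketch that reads the kernel's transparent hugepage settings:

                    # CPU-side analogue of the large-page idea: show transparent hugepage settings.
                    # Standard Linux path; this is only an illustration, not the amdgpu change itself.
                    for name in ("enabled", "defrag"):
                        with open("/sys/kernel/mm/transparent_hugepage/" + name) as f:
                            print(name + ":", f.read().strip())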


                    • #40
                      qarium I can speak only for low-end motherboards. In the past (the Athlon XP era) MSI motherboards kicked some serious a**; they were great (MSI Neo etc.), and ASUS was great as well. However, in the Bulldozer era, AM3+ MSI motherboards had terrible VRMs, affecting both stability and PC usage in general - but no matter how bad MSI was in that segment (lower end, even mainstream), they were far better than Gigabyte (which sucked even in the K7/Athlon XP era, even at the high end).

                      ASUS, on the other hand, has a really good track record in my experience; those motherboards usually worked the best. MSI had problems with lower-quality parts in the past, Gigabyte with stuttering and lag, while ASUS kept all of that in check. ASUS did have periodic BIOS problems (the ASUS K7V8X with a VIA chipset; funnily enough, the competing NVIDIA chipset did not have those BIOS problems on the ASUS K7N8X with nForce2 Ultra 400) and the usual amount of RMAs, probably more than Gigabyte, but it was worth it in my opinion.

                      As far as I know, though, all motherboard manufacturers spent very little on AMD systems in order to cut prices in the FX era, and even back in the Athlon XP era they did less than they would for Intel, in my opinion harming the AMD brand. I remember an Athlon XP 2000+ working like total cr*p on a Gigabyte motherboard (stutter, lag, all sorts of nonsense) while working perfectly on my ASRock motherboard with a VIA chipset, and later on an Abit with an nForce chipset. Honestly, even though the nForce was faster with RAM and so on, the VIA did work "smoother", if that makes any sense - or better said, the ASRock did, since ASUS with nForce worked better than Abit.

                      Not sure what AMD can do about it, but that is already a long-term problem with motherboard manufacturers. People claim it has gotten quite a bit better since "Zen", but I can't speak about things I do not know.

                      bridgman Oh, I see. I hadn't really investigated; I just did what I was asked to do.
