
CPUFreq Governor Tuning For Better AMD Ryzen Linux Performance


  • #41
    Originally posted by debianxfce View Post

    Kabylake and other wintel religion stuff has been out for months, and firmware and drivers are fixed. It is wintel religious to test new AMD hardware with partially implemented software.
    Again, could you stop this BS - there is no wintel religion, and you are just a troll... some reviewers don't even want to touch Ryzen gaming tests because of crazy comments like that



    • #42
      Originally posted by debianxfce View Post
      You are a wintel troll who is spamming wintel religion videos.
      The troll is you; such a thing as "wintel religion" does not exist

      Michael, you should ban dungeon, but you won't, because this site is an Intel fan site.
      Yes, Michael should ban me, while you are the one accusing Michael of his site being something that does not exist

      Wintel religion goes this way: MS, Canonical and Red Hat make heavy bloatware, and people buy new expensive hardware to run that shit.
      What are you talking about? Microsoft is even sponsoring your Debian... see the next DebConf17 in Canada, where Microsoft is a Silver sponsor, and DebConf16 again had Microsoft, etc...

      So Debian is sponsored by Microsoft, and sponsored by Valve too, but at the same time it has been sponsored by the FSF since the early beginning, ships by default with a non-blob firmware installation and runs on the utterly inexpensive RPi, among many other things - how is that even possible

      He, he, since it is in Canada maybe Bridgman should visit DebConf to see what it looks like when Microsoft, Valve and FSF representatives are at the same conference, but the thing is not only about them
      Last edited by dungeon; 07 March 2017, 10:50 AM.



      • #43
        The choice of 4K resolution for the benchmarks is a strange one, as you want to make sure the tests are not GPU-bound when comparing CPU governors… Also, the conclusion of the article just repeats the stated goals of each governor, but does not reflect the actual benchmarks, which show that powersave does not save power and that performance is not significantly faster than ondemand or schedutil.
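        For anyone who wants to redo such a comparison, here is a minimal sketch (assuming the standard cpufreq sysfs layout, e.g. acpi-cpufreq on Ryzen, and root privileges for writing; the script itself is only illustrative) for switching the governor on every core:

        Code:
#!/usr/bin/env python3
# Minimal sketch: list and switch cpufreq governors through sysfs.
# Assumes the standard cpufreq sysfs layout (e.g. acpi-cpufreq on Ryzen)
# and root privileges for writing the governor.
import glob
import sys

CPUFREQ = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq"

def available_governors():
    # Every core advertises the same list, so reading one policy is enough.
    with open(glob.glob(CPUFREQ)[0] + "/scaling_available_governors") as f:
        return f.read().split()

def set_governor(name):
    # Write the requested governor to every core.
    for path in glob.glob(CPUFREQ):
        with open(path + "/scaling_governor", "w") as f:
            f.write(name)

if __name__ == "__main__":
    print("available:", " ".join(available_governors()))
    if len(sys.argv) == 2:
        set_governor(sys.argv[1])
    for path in sorted(glob.glob(CPUFREQ)):
        with open(path + "/scaling_governor") as f:
            print(path.split("/")[-2], "->", f.read().strip())

        Run the same benchmark once per governor at a clearly CPU-bound resolution and any governor differences (or the lack of them) show up without the GPU masking everything.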



        • #44
          Originally posted by dungeon View Post

          There is no pro-1080p vs non-pro there. Joke aside, it can be any resolution with a CPU bottleneck, as when you want to compare CPUs for gaming you really want to compare CPU-bottlenecked cases, otherwise what is the point of comparing these CPUs for gaming, since a PlayStation could play games too. Gamers Nexus tested the 1800X there and speaks only about that... always with price in mind too, as that is the main point beside average performance. On that 1800X point, they concluded in comparison to the 7700K that from a gaming POV the user should not pay more for less performance, and I think that too. As I see, now they have another one on the 1700.

          Yes yes, I watched all those. But if that is the use case, then you should test something more like 480p with the lowest possible graphics settings.
          This however does not negate other resolutions; they are just applicable to other use cases, and 480p is applicable to the use case here. In theory, a Ryzen 1700 has more potential/horsepower than a same-price i7 7700K. What 1080p (preferably even lower resolution for this use case) shows is the current level of optimization, while synthetics show more of the potential of the CPU with future optimizations. As for 1080p, 1440p and 4K, I think all of these are bad for such a use case, but instead serve to show current performance in the real world, and should be paired with equivalent hardware and settings to show this. All of these tests have their relevance, but people are claiming, hating and otherwise flaming left and right without truly understanding what the reviewers are actually showing and why.

          I agree that on Windows an i7-7700K is better for 1080p gaming, and it is roughly competitive with the Ryzen at 4K. Understanding all this, the results do not mean, as Gamers Nexus is claiming, that say 2 years down the road with new games and graphics hardware Ryzen will bottleneck more than an i7-7700K in *new games*, while it might in the then 2-year-old games. So that said, if the 1080p numbers in current games are playable, then what that proves is that they will be at least as playable in those games in 2 years. Future games might become 8+ thread optimized, taking advantage of the extra horsepower and negating Gamers Nexus' conclusion. That conclusion only holds true if game devs fail to optimize for more than 4 threads; in that case 1080p results will stay the same between these processors in the future, meaning the i7-7700K is the better buy.

          In the end, it boils down to what you individually believe the future holds. This is why you should take this debate with a pinch of salt; both camps are polarized and claim shit left and right.



          • #45
            Originally posted by sarfarazahmad View Post

            Ya, that puts it in a bad light. Maybe they thought their SenseMI branch prediction was smart enough to compensate for that? Does that make sense? It's a little disappointing that all that effort gets killed by one bad L3 cache decision.
            I suspect you're right, but the situation isn't dire. A smarter scheduler can fix some of this, and mitigate the worst cases so they occur less often.
            As I said, it will take some work. The cores certainly have tons of data collection points, so that, coupled with profiling of their worst offenders, should help (keep in mind, Linux recently got some measure of policy control over the cache on Intel - https://lwn.net/Articles/694800/). The issue will be how much of this cache control is firmware-only (or even policy implemented in hardware!) and whether it can be made amenable to hints from the kernel.
            However, assuming none of that happens, it's still an awesome chip. The coming R3 should give us a much better idea of how well they'll do when that factor-of-8 L3 cache penalty is no longer in play.

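            For context on that "policy control over the cache": the Intel mechanism from the LWN article ends up exposed as the resctrl filesystem (kernel 4.10+). A rough sketch of driving it, assuming CAT-capable Intel hardware, a single L3 cache domain and root privileges - the group name and the way mask are purely illustrative:

            Code:
#!/usr/bin/env python3
# Rough sketch of the Intel RDT/CAT resctrl interface (kernel 4.10+).
# Assumes CAT-capable hardware, a single L3 cache domain (id 0) and root;
# the group name and the L3 way mask are illustrative only.
import os
import subprocess

RESCTRL = "/sys/fs/resctrl"

# Mount the resctrl filesystem if it is not mounted yet.
if not os.path.exists(os.path.join(RESCTRL, "schemata")):
    subprocess.run(["mount", "-t", "resctrl", "resctrl", RESCTRL], check=True)

# Create a resource group whose tasks get only part of the L3.
group = os.path.join(RESCTRL, "capped")
os.makedirs(group, exist_ok=True)

# Restrict cache domain 0 to the low four ways of L3 (bitmask "f").
with open(os.path.join(group, "schemata"), "w") as f:
    f.write("L3:0=f\n")

# Move the current process into the group so its L3 footprint is capped.
with open(os.path.join(group, "tasks"), "w") as f:
    f.write(str(os.getpid()))

            Whether Ryzen's L3 can ever be steered with kernel hints like this is exactly the open question above.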


            • #46
              Originally posted by sarfarazahmad View Post
              Thanks for the detailed reply, that cleared things up. Just one other question: how does Intel do it with the Core i7-6900K?
              AFAIK Intel splits L3 cache up even more than we do - each core has a chunk of LLC, and a ring bus allows any core to access any chunk.

              Originally posted by liam View Post

              OK! Well, I was just about to update my last post because I found a site that has done the work for us.
              https://www.techpowerup.com/231268/a...cx-compromises
              Latency (ns)   L1    L2        L3          DRAM
              1800X          1.3   5.6-8.4   13.6-100    100
              i7-6900K       1.3   4-7       15.8-18.8   70
              Edit: That doesn't really explain anything. First, Intel makes the best (fastest) cache in the world. Second, they don't seem to use a modular system like AMD (the CCX).
              What I'd expect to see when AMD releases a quad-core, single-CCX SoC is something much less idiosyncratic.
              So notice the huge variation in the 1800X L3 times. That's a much worse factor than I expected. OTOH, it gives us some idea of what to expect from this next-gen fabric AMD is using (i.e., 100 ns best-case latency).
              Probably worth waiting for an updated AIDA64 before drawing any conclusions:

              https://forums.aida64.com/topic/3768...en-processors/

              4) L2 cache and L3 cache scores indicate a lower performance than the peak performance of Ryzen. The scores AIDA64 measures are actually not incorrect; they just show the average performance of the L2 and L3 caches rather than the peak performance. It will of course be fixed soon.
              The obvious follow-on is "average with what workload?". During normal operation, discarded L2 entries from cores on each CCX will end up in the L3 partition associated with that CCX (i.e. the closest one), in the same way they do with Intel parts. What I don't know is the extent to which the current code was designed around the way Intel LLC operates, and hence how much difference in results will be seen once Ryzen is factored into the code.

              Originally posted by dungeon View Post
              If we talk about today, that is what it is. If we talk about the future, then - maybe. Who knows that really, so how can a reviewer measure something with a benchmark from the future? That sounds impossible to do, doesn't it? That is where Steve complained, but anyway there was a video there on the details (there are also interesting phone calls with AMD there), anyway recommended to watch, blah, blah...
              Yeah, there are two conflicting views on this - one is the "performance at low res is the best predictor of performance you will get in the future, particularly if you upgrade your graphics card" and the other is "sure, assuming nothing else changes like games getting better at using multiple cores and your 2- or 4-core high clock CPU maxing out all its cores and becoming the bottleneck".

              Gamers Nexus is arguing in favour of the first view, Adored is arguing for the second (and there are a lot of people in between); both viewpoints are worth a watch. The supporting evidence for the second view is benchmarks where the cores on a 7700K are pretty much (but not quite) maxed out, and the argument is that this is already starting to become visible in the form of higher minimum frame rates than you get with Ryzen.
              Last edited by bridgman; 07 March 2017, 07:40 PM.



              • #47
                Originally posted by dungeon View Post

                Nope, they don't... they tested 1440p there as well, but comparing CPUs for gaming while also introducing more GPU bottleneck makes the CPU comparison more and more of a moot point

                Also, the reality is that 95% of people currently on Steam use 1080p monitors or less... yes, or less; there are also many more people using less than 1080p than anything beyond it

                If we talk about today, that is what it is. If we talk about the future, then - maybe. Who knows that really, so how can a reviewer measure something with a benchmark from the future? That sounds impossible to do, doesn't it? That is where Steve complained, but anyway there was a video there on the details (there are also interesting phone calls with AMD there), anyway recommended to watch, blah, blah...
                Yes, people are using a 1080p monitor or less, but this is useless information without the context of the hardware they are using. You need a card with the performance of AT LEAST a GTX 1080 to start to see this bottleneck. People are using much, much less powerful video cards, and even with year-over-year performance gains we're talking 4 or more years down the road before that level of performance becomes mainstream. Also look at the core count of their processors; there is very little overlap with people going out to spend at least $330 on a CPU.

                This is with an admittedly buggy scheduler, using software that isn't optimized, on buggy motherboards. Even with pessimistic estimations those things will be sorted out some time next year, requiring even more graphics hardware to bottleneck the CPU. There are also massive differences between benchmarks, as part of this fiasco is that some people don't have buggy motherboards, are easily getting their RAM to 3 GHz and beyond, and are getting better performance.

                This is clearly a software issue, because if it were a hardware issue it would show up in Cinebench. The performance is there, and this isn't Bulldozer.
                 



                • #48
                  Originally posted by bridgman View Post
                  ...
                  Yeah, there are two conflicting views on this - one is the "performance at low res is the best predictor of performance you will get in the future, particularly if you upgrade your graphics card" and the other is "sure, assuming nothing else changes like games getting better at using multiple cores and your 2- or 4-core high clock CPU maxing out all its cores and becoming the bottleneck".

                  Gamers Nexus is arguing in favour of the first view, Adored is arguing for the second (and there are a lot of people in between); both viewpoints are worth a watch. The supporting evidence for the second view is benchmarks where the cores on a 7700K are pretty much (but not quite) maxed out, and the argument is that this is already starting to become visible in the form of higher minimum frame rates than you get with Ryzen.
                  ...
                  I don't quite understand what you are saying about "supporting evidence" and "the argument"; maybe I missed some YouTube videos.

                  Obviously, with Ryzen, the price point for 8+ core CPUs has come down a lot. Are there real doubts that this trend will continue? I don't think so, even if some gaming experts seem to think of the Ryzen 7's 8 cores as if they were a temporary odd thing. But the price point for higher core counts will come further down, and they will become a cheaper solution also for lower performance levels.

                  With that development, sooner or later the gaming industry will find its way around bottlenecking at 4 or 5 cores, and that will be that. The only question here, I think, is at which point in time the gaming industry will move, and whether that will be soon enough to matter for someone who buys a CPU now on the basis of gaming considerations. I think it definitely could be, but that's not decided yet.



                  • #49
                    Originally posted by indepe View Post
                    I don't quite understand what you are saying about "supporting evidence" and "the argument"; maybe I missed some YouTube videos.
                    Check the Adored video that Geopirate linked above, starting around 13:05. Looks like the actual testing was done by Joker Productions and referenced in the Adored video.

                    The "argument" goes something like this:

                    Various : benchmarking at low resolutions is not a good predictor of future performance - at real world resolutions the GPU is already becoming a major bottleneck when you have a CPU in this performance class

                    Gamers Nexus : but what if you get a faster GPU in the future ? Won't the bottleneck shift back to the CPU again ?

                    Adored + Joker : that's not the only thing likely to happen in the future - you can see from <series of benchmarks comparing Bulldozer to Sandy Bridge over the years> that games are becoming more multi-core friendly over time, and some games are starting to max out the 4-core parts already (see <snippet from Joker videos>).

                    The notes above are very simplified - nobody actually took such a black and white position - I'm just trying to point out the key ideas being discussed. There are also still differences between benchmark results although those are starting to settle down now that we are getting more consistency in memory speeds and use of SMT.
                    Last edited by bridgman; 08 March 2017, 01:22 AM.



                    • #50
                      Originally posted by MrCooper View Post
                      I switched my laptops from conservative/ondemand to schedutil when it became available in 4.7 and haven't looked back. IME it feels at least as snappy as ondemand, while keeping the CPU fan as quiet as conservative, because it adapts the clocks to load more quickly than either of them.
                      schedutil vs intel_pstate ?
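                      Easy enough to check which one a box is actually running: the active scaling driver and governor are exposed per CPU in sysfs. A minimal sketch for reading them (assuming the standard cpufreq layout; with intel_pstate in its default active mode you normally only see its own "performance" and "powersave" governors, while acpi-cpufreq on Ryzen offers ondemand, schedutil and the rest):

                      Code:
#!/usr/bin/env python3
# Minimal sketch: show the active cpufreq scaling driver and governor for
# each CPU, plus the governors that could be selected. Read-only, so no
# root needed; assumes the standard cpufreq sysfs layout.
import glob
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for cpu_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
    cpu = cpu_dir.split("/")[-2]
    print(cpu,
          "| driver:", read(os.path.join(cpu_dir, "scaling_driver")),
          "| governor:", read(os.path.join(cpu_dir, "scaling_governor")),
          "| available:", read(os.path.join(cpu_dir, "scaling_available_governors")))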
