Core i3 vs. Core i5 Performance Impact On OpenGL/Vulkan Linux Gaming


  • #11
    Originally posted by gamerk2 View Post

    Not quite true. While the i3 can often keep up in terms of FPS, it fares significantly worse in terms of latency. That's where those extra cores in the i5 make a huge difference.
    How in the world can you even make this statement without min/max frametime and minimum FPS?



    • #12
      Originally posted by nomadewolf View Post
      Just my 2 cents:
      On Intel processors from the i3 (inclusive) upwards, what really counts are the cores.
      Bottom line: if you're unsure between two processors with the same number of cores and the same technology, then of course just get the cheapest.
      If cores aren't that important, look at single-core performance tests. But nowadays most software should be able to use multiple cores without too much latency, even OpenGL games.



      • #13
        Originally posted by Michael View Post

        Yes, of course the follow-up comparison will have NVIDIA results; these were just some results for those curious about Vulkan and RadeonSI/RADV in particular. I'll be doing more tests when my 7700K arrives, and I also have Celeron/Pentium Kaby Lake chips coming for more entertainment.
        Great! Some of those Celeron/Pentium chips actually look interesting as a replacement for my long-outdated netbook. See the discussion about Arch dropping 32-bit for that...



        • #14
          Originally posted by chuckula View Post
          Hell, not even bothering to leave the confines of this article, that Furry is literally losing to the far less powerful Rx 480, and the Rx 480 barely manages to beat the far less powerful Rx 460.
          At the risk of being called a koolaid-drinking shill, you will notice that the tests were run at the lowest resolutions. Most were run at 800x600, where all the GPUs were fast enough that driver/CPU limits dominated. That was the purpose of the tests - to identify changes from driver/CPU limits - and if the Fury/480/460 had shown significantly different results, then Michael would have been doing the tests wrong.

          In the tests with higher resolution and/or more demanding games you can see the 460 start to fall back... and if resolution had gone higher you would have seen the 480 fall back from the Fury in most cases as well (although 480 stays faster in geometry-limited games).

          Originally posted by chuckula View Post
          As for the Rx 480, it's a power hog that's supposed to be 40 - 50% faster than the GTX-1060 on paper. Doesn't work out that well in the real-world though.
          That's because the "on paper" spec you are probably thinking about (raw flops) is usually calculated at base clock rather than actual clocks. If you calculate at actual clocks the specs tend to be a lot closer (maybe 5.9 TF for 480 vs 5.0 TF for 1060). There is also the obvious point that raw flops is only one factor in overall performance - depending on the geometry vs compute mix in each application you see differing results, with the 480 often running ahead of the 1060 on newer DX & Vulkan apps.
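          As a rough sanity check on those "on paper" numbers, here is a minimal sketch of the usual peak-FLOPS arithmetic (shaders × 2 FLOPs per clock for FMA × clock); the shader counts and the "actual" sustained clocks below are assumed typical figures, not measurements from this article:

```python
# Peak single-precision throughput estimate: shaders * 2 FLOPs/clock (FMA) * clock.
# Shader counts and sustained clocks are assumed typical values, not measured here.
def peak_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000.0

# RX 480: 2304 stream processors; GTX 1060: 1280 CUDA cores (assumed).
print(f"RX 480   @ 1.12 GHz base:    {peak_tflops(2304, 1.12):.1f} TFLOPS")  # ~5.2
print(f"RX 480   @ ~1.27 GHz actual: {peak_tflops(2304, 1.27):.1f} TFLOPS")  # ~5.9
print(f"GTX 1060 @ 1.51 GHz base:    {peak_tflops(1280, 1.51):.1f} TFLOPS")  # ~3.9
print(f"GTX 1060 @ ~1.95 GHz actual: {peak_tflops(1280, 1.95):.1f} TFLOPS")  # ~5.0
```

          Depending on which pair of clocks you plug in, the paper advantage for the 480 ranges from under 20% to about 50%, which is why the actual-clock comparison above is the more meaningful one.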



          • #15
            Disclaimer: Just wanted to refute this part of your post.

            Originally posted by chuckula View Post

            As for the Rx 480, it's a power hog that's supposed to be 40 - 50% faster than the GTX-1060 on paper. Doesn't work out that well in the real-world though.

            * And by "hot", I'm talking about the almost irresponsible power consumption levels for a rather pedestrian level of performance.
            I believe the GTX 1060 is similar to the GTX 980 in terms of performance. Under typical gaming loads, the 1060 consumes ~115 W and the 980 ~155 W, i.e., similar performance at roughly 74% of the previous-generation card's power consumption (16 nm vs. 28 nm). In the AMD camp, the RX 480 is similar in performance to the R9 290X/390. Under typical gaming loads, the 480 consumes ~165 W and the 290X/390 ~260 W, i.e., similar performance at roughly 64% of the previous-generation card's power (14 nm vs. 28 nm). I'm not sure how much of that comes down to the process node difference between NVIDIA and AMD; my guess is that AMD actually made more perf/watt optimizations with the 480 than NVIDIA did with the 1060. Correct me if I'm wrong!
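            For reference, those ratios work out as below; this is just a quick sketch using the approximate typical-gaming wattages quoted above, not precise measurements:

```python
# Power of the newer card relative to the older card it roughly matches in performance.
# Wattages are the approximate typical-gaming figures quoted above, not measurements.
pairs = {
    "GTX 1060 vs. GTX 980":    (115, 155),
    "RX 480 vs. R9 290X/390":  (165, 260),
}
for name, (new_w, old_w) in pairs.items():
    print(f"{name}: {new_w / old_w:.0%} of previous-gen power for similar performance")
# -> roughly 74% (NVIDIA) and 63-64% (AMD)
```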



            • #16
              Originally posted by cj.wijtmans View Post

              How in the world can you even make this statement without min/max frametime and minimum FPS?
              Because we've benchmarked it heavily in the Windows world. i3s suffer from far more latency-related issues than i5s, and we have many examples of i3s pushing out an average of 50 FPS yet being nearly unplayable due to latency-related stutter. FPS isn't the be-all and end-all; what matters is the ability to push a frame to the monitor every 16.67 ms.
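              A toy illustration of why an FPS average hides this; the frame times below are invented for the example, not data from any of these benchmarks:

```python
# Two runs with the same average FPS but very different frame pacing.
# Frame times (ms) are made up purely for illustration.
smooth  = [20.0] * 100                  # steady 50 FPS, no spikes
stutter = [12.0] * 90 + [92.0] * 10     # same average, but periodic ~92 ms hitches

def stats(frametimes_ms):
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    worst_ms = max(frametimes_ms)
    # "1% low" FPS: the frame rate implied by the 99th-percentile frame time.
    p99_ms = sorted(frametimes_ms)[int(0.99 * len(frametimes_ms))]
    return avg_fps, worst_ms, 1000.0 / p99_ms

for name, run in (("smooth", smooth), ("stutter", stutter)):
    avg, worst, low = stats(run)
    print(f"{name}: {avg:.0f} avg FPS, worst frame {worst:.0f} ms, 1% low {low:.0f} FPS")
```

              Both runs report 50 FPS on average, but the second one blows past the 16.67 ms budget every tenth frame, which is exactly the kind of stutter an average hides.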



              • #17
                Originally posted by Xicronic View Post
                Michael, do Kaby Lake chips significantly improve Radeon gaming performance relative to older generations like Sandy and Ivy Bridge?
                Not Michael, but for most games, no; for some games, yes. This is more true if you're upgrading to a *lake processor that also has a higher clock speed and/or more cores, because then you get more than just the architectural improvement itself. But if you're not getting slowdowns with current games, my personal advice is to hold off on upgrading as long as possible, whether for AMD Ryzen, Intel Coffee Lake or Ice Lake, or even beyond if you're still running okay by then.

                Originally posted by article page 3
                Do note that the i3-7100 has a higher base clock frequency than the i5-7600K (3.9 vs. 3.8 GHz) while the i5-7600K can boost up to 4.2GHz. So if the i5-7600K isn't boosting in this particular case, that could explain why the i3-7100 comes out ahead.
                That would only explain a ~2.6% lead for the 7100, yet the 7100 wins by up to ~43% in some of the tests on that page. It may still be a case of power management: maybe the 7600K is not only not boosting, but not even running at its base clock. I can only imagine that being the case if A) it's running too hot, or B) the friggin' power management kernel drivers are messing up yet again.
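                The mismatch between what the clocks predict and what the results show is easy to make explicit; a trivial sketch, with the ~43% figure being just the largest lead I spotted on that page:

```python
# Expected scaling from base clocks alone vs. the observed gap on that page.
i3_base_ghz, i5_base_ghz = 3.9, 3.8   # from the article quote
expected = i3_base_ghz / i5_base_ghz - 1
print(f"expected from base clocks: {expected:.1%}")   # ~2.6%
print("observed in some results:  ~43%  (far beyond clock differences alone)")
```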
                Last edited by Holograph; 25 January 2017, 12:42 PM.



                • #18
                  Originally posted by chuckula View Post
                  Have you bothered to look at ANY of the benchmarks that have been posted on here for the last year or so? AMD has major performance problems under Linux. MAJOR.
                  And yet it still successfully painted a picture for these CPU tests, so what's your point? Everyone knows that if the frame rate were limited to 60 FPS, both CPUs would be good enough.
                  Originally posted by chuckula View Post
                  That Furry that you are so hot for* has the same theoretical compute power as a GTX-1070 but loses by a large margin in practically every benchmark that's ever been run here.
                  It's not even close.
                  And again, these are CPU tests. It doesn't matter whether the 1070 was used. Also, I wouldn't want a Fury; if you gave me money to buy one, I'd buy something else.

                  So here's the source of the "shill" in my koolaid: you are notoriously biased toward Nvidia. In most articles involving AMD hardware that you have posted in, you have said something either anti-AMD or pro-Nvidia. Nvidia makes great products; I'm not denying that. I own Nvidia products myself. I'm aware that the 1070 is better than the Fury in practically every way. But you promote Nvidia to the point where it's not even necessary, and it just gets irritating. The very things you complain are uninteresting about this article would only become less interesting if Michael did things your way.

                  I'm fine with people requesting Nvidia tests, but it's your request and everything it stands for that I don't like.
                  Last edited by schmidtbag; 25 January 2017, 01:05 PM.



                  • #19
                    Now put in a $70 overclocked Pentium and you win all those game benchmarks!



                    • #20
                      Originally posted by gamerk2 View Post

                      Because we've benchmarked it heavily in the Windows world. i3s suffer from far more latency-related issues than i5s, and we have many examples of i3s pushing out an average of 50 FPS yet being nearly unplayable due to latency-related stutter. FPS isn't the be-all and end-all; what matters is the ability to push a frame to the monitor every 16.67 ms.
                      My thoughts exactly. Average FPS is an approximate indicator of performance and doesn't tell the whole story.

