Core i7 7700K vs. Ryzen 7 1800X With Ubuntu 17.04 + Linux 4.12


  • #41
    Originally posted by mlau View Post

    Performance per core is more or less identical on Zen and Core i7; the Core i7 just clocks much, much higher when only one or two cores are taxed. And that's what you see in almost all games: Intel's clock speed advantage of almost 1 GHz. I've never seen my 1800X go beyond 3.6 GHz in Linux, even when only one core is taxed, so the 4.4/4.5 GHz on Kaby Lake makes it look better.
    You can overclock your 1800X to 3.9 GHz if you have a decent motherboard. 3.9 GHz should be doable with the 1700/1700X/1800X, maybe even 4.0 GHz.

    Comment


    • #42
      Originally posted by fuzz View Post

      I'm not the one saying I'm disappointed; I'm mostly just pointing out to others that there is no reason to be disappointed in Ryzen. Or did you misquote?
      "CPU with higher clock speed goes faster." <-- not true; it depends on the task. All Intel server CPUs with 12+ cores have much lower clock speeds, and they 'go faster' in the tasks they were designed for. Game FPS isn't the only metric; throughput is a much better metric for some useful tasks. Besides, with high throughput you can often finish tasks faster, for example in web servers. It just depends on how you define the tasks.

      Comment


      • #43
        Originally posted by wizard69 View Post

        Only some of them. Considering how threaded software is becoming more common, and given my needs, Ryzen still looks like the ideal solution.
        This prediction appeared 13 years ago: http://www.gotw.ca/publications/concurrency-ddj.htm

        Intel announced the i7 3770K in 2012. This year the same CPU market segment offers the 7700K. How much performance did we gain in the last five years? Not much. I recall that the current performance increase in i7 CPUs is around 4% per year. According to this (http://preshing.com/20120208/a-look-...u-performance/) it used to be ~20% per year, and before the multicore boom around 50-60% per year. So single-core speedups are a dead end. People should read about the law of diminishing returns. Unless we switch to other materials or technology (light?) in manufacturing, nothing can be done.

        Multiple cores, on the other hand, help us by scaling a lot better. Moore's law is still alive and kicking: process nodes keep shrinking, and we can cram in twice as many transistors every 18-24 months, at least for a few more years. Twice the transistor count means twice as many cores. Assume multithreaded code scales by 50-90% for every 100% more cores. We already scale about 10x better by adding cores than by fine-tuning IPC and clock speed.

        Game developers are living in false beliefs if they think things will stay this way forever. There's no law or fact that says game code won't scale. Games are the most passionate users of multiple cores on the GPU: they scale to 4000 CUDA cores just fine. If you can't fit more cores in a chip, you move to multiple sockets. You can already fit 32 Zen cores in a chip and 4 sockets on an EATX board. That's 256 threads. It doesn't really matter if an i7 7700K core is 50% faster than a Zen core. Most tasks are malleable. Intel would need to overclock to 64 GHz to beat a 256-thread AMD system. If you think game code won't scale, give me an example and I'll tell you how to parallelize it.
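
        Since the post ends with "give me an example and I'll tell you how to parallelize it", here is a minimal sketch of my own (the Entity type and update rule are made up for illustration, not taken from any game) of the sort of parallelization meant: an independent per-entity update split evenly across hardware threads.

        ```cpp
        #include <algorithm>
        #include <cstddef>
        #include <functional>
        #include <thread>
        #include <vector>

        struct Entity { float x = 0.0f, vx = 1.0f; };

        // Update a contiguous slice of the world; slices do not overlap, so no locking is needed.
        void update_range(std::vector<Entity>& world, std::size_t begin, std::size_t end, float dt) {
            for (std::size_t i = begin; i < end; ++i)
                world[i].x += world[i].vx * dt;
        }

        // Split the entity list across however many hardware threads are available.
        void update_world(std::vector<Entity>& world, float dt) {
            const std::size_t n_threads = std::max(1u, std::thread::hardware_concurrency());
            const std::size_t chunk = (world.size() + n_threads - 1) / n_threads;
            std::vector<std::thread> pool;
            for (std::size_t t = 0; t < n_threads; ++t) {
                const std::size_t begin = t * chunk;
                const std::size_t end = std::min(world.size(), begin + chunk);
                if (begin < end)
                    pool.emplace_back(update_range, std::ref(world), begin, end, dt);
            }
            for (auto& th : pool) th.join();
        }
        ```

        In practice a real engine would reuse a thread pool or task system instead of spawning threads every frame, but the point stands: when per-entity work is independent, it splits across cores with very little coordination.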

        Comment


        • #44
          Originally posted by Sidicas View Post
          So many promises, but the fact remains that games nowadays require single-thread performance just as much as they did 20 years ago.
          The secret behind poor game performance scaling is that designing games for both 4+ cores and 1-2 cores is more expensive, and good scaling on 4+ cores costs a few percent of performance on 1-2 cores. Why? If you just do things sequentially, you do one task, then another; you spend 99.99% of the time doing tasks rather than coordinating them. Scaling to multicore is like running a 100,000-person company: you need to manage the work, and managing takes time. It can take 1% or 10%. You lose 10% of performance per worker, but you get a one-million-percent speedup overall.

          Game developers want to maximize the size of their paying audience; they want money from the rich and the poor. The problem is that slow legacy computers perform worse, while new machines are already more than good enough for their games. So why would you optimize for people who already get decent performance? Your market won't grow that way. Instead your time to market grows, your expenses grow, and you get a very minimal bonus in target-audience size. Gamers upgrade machines because they want to play new games; the performance is either acceptable or it isn't. Can you spot the difference between 500 FPS and 600 FPS?

          With useful software, more work gets done on faster machines: higher throughput means more work done in less time. In games this isn't the case. You don't want to minimize the time spent on games, you want to maximize it. It's a totally different market. Nevertheless, I still think it would be nice if game developers switched to properly threaded designs; it would help CPU designs progress in other fields too.
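
          A rough way to put numbers on the "managing takes time" point, using Amdahl's law (my own illustration, not from the post): assume some fraction of each frame stays serial coordination work and the rest splits perfectly across the cores.

          ```cpp
          #include <cstdio>

          // Amdahl's law: speedup = 1 / (serial_fraction + (1 - serial_fraction) / cores).
          double speedup(double serial_fraction, int cores) {
              return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
          }

          int main() {
              // With 10% of the frame left serial, adding cores quickly stops paying off.
              std::printf("4 cores:   %.2fx\n", speedup(0.10, 4));    // ~3.08x
              std::printf("16 cores:  %.2fx\n", speedup(0.10, 16));   // ~6.40x
              std::printf("256 cores: %.2fx\n", speedup(0.10, 256));  // ~9.66x
          }
          ```

          That is exactly the trade-off described above: the coordination overhead is small in absolute terms, but it caps how much the extra cores can help, which is why shrinking the serial part matters more than piling on cores past a point.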

          Comment


          • #45
            Originally posted by efikkan View Post
            Ryzen has a wider superscalar design, so there is indeed some theoretical potential. In theory it should handle 33% or more operations per clock for mixed integer and float work, provided the prefetcher keeps all those ALUs and FPUs fed, of course. And that really is Ryzen's weakness so far: except for AVX, it has more computational resources than Intel, but Intel is much better at saturating its resources.

            There are two ways of solving this:
            1) Make all performance-critical software cache friendly. In most cases this would require complete rewrites of applications, so it's not going to happen anytime soon. And considering that the trend in development is still to add more bloat, it doesn't seem like this is going to improve overall. Making software cache-optimized will make it scale well on both vendors, but it will help AMD even more.
            2) Make a better prefetcher for the CPU. If AMD had a prefetcher just as good as Intel's, Ryzen would scale much better (per core) in any application that is not AVX-dominant.
            I imagine better compilers could also help with the cache optimization. From what it seems, it isn't so much that the prefetcher isn't as good as Intel's; it just doesn't have the throughput to feed the beast. I have a strong feeling that the prefetcher will get higher throughput in Zen 2; it's definitely the lower-hanging fruit right now. If we could use the full width of Ryzen (that 33%), we would see a generational leap in IPC similar to how Ryzen performed in Z-Bench before they "fixed" it. That said, without programs with high ILP the IPC isn't really going to matter, which comes back to compiler optimization. I'll make the crazy guess that Ryzen's width is part of why it doesn't clock as high as Intel, so hopefully, if the higher throughput can be utilized, Zen 2 will be much faster. Lastly, I think they made a good choice by limiting the amount of AVX hardware in Zen; it uses a lot of die space and generates a lot of heat when active.
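
            To make the "cache friendly" point concrete, here is a small sketch of my own (not from the post): the same sum computed with a unit-stride access pattern the prefetcher can follow easily, versus a large-stride pattern that touches a new cache line for every element.

            ```cpp
            #include <cstddef>
            #include <cstdio>
            #include <vector>

            constexpr std::size_t N = 2048;  // 2048 x 2048 doubles, far larger than any cache

            // Unit stride: every 64-byte cache line is fully used before moving on.
            double sum_row_major(const std::vector<double>& m) {
                double s = 0.0;
                for (std::size_t row = 0; row < N; ++row)
                    for (std::size_t col = 0; col < N; ++col)
                        s += m[row * N + col];
                return s;
            }

            // Stride of N doubles: each access pulls in a cache line and uses one element of it.
            double sum_col_major(const std::vector<double>& m) {
                double s = 0.0;
                for (std::size_t col = 0; col < N; ++col)
                    for (std::size_t row = 0; row < N; ++row)
                        s += m[row * N + col];
                return s;
            }

            int main() {
                std::vector<double> m(N * N, 1.0);
                std::printf("%f %f\n", sum_row_major(m), sum_col_major(m));
            }
            ```

            Both functions do the same arithmetic; only the access order differs, and that difference is exactly what a better prefetcher (or cache-aware code) hides or avoids.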

            Comment


            • #46
              Originally posted by VikingGe View Post
              What does surprise me, though, is by how much Kaby Lake wins here, and how small the difference is even in fully multi-threaded workloads like kernel compilation. When Ryzen and the first benchmarks came out I honestly thought they were really good CPUs, but the more benchmarks I see, even the non-gaming ones, the more I have to change my opinion, to the point where I'm inclined to say that, maybe with the exception of the cheap six-core parts if you need i7 levels of multi-threaded performance, they aren't worth considering under any circumstances.

              And yeah, in gaming they are getting wrecked so hard it's not even funny. I'm still unsure what I should replace my old 1090T with - an i5 isn't an option because they're hardly any faster at compiling than my current chip (the 7400 is actually slower), an i7 isn't an option because it's simply too expensive, and Ryzen 5 isn't an option either because... well, they're terrible at anything other than compiling.
              Wait. You want your compiling to be faster? Moar cores, RAM, and fast storage!!!!!
              Ryzen is a really good chip, but that NUMA factor is pretty bad. If you can keep your processes from hopping between core complexes, per-core performance and IPC should be quite competitive.
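
              For what it's worth, here is a minimal Linux-specific sketch of my own (not something from the thread) of keeping a thread on one core so it stays within a single CCX instead of migrating; compile with -pthread.

              ```cpp
              #define _GNU_SOURCE
              #include <pthread.h>
              #include <sched.h>
              #include <cstdio>

              // Restrict the calling thread to a single logical CPU; returns true on success.
              bool pin_current_thread_to_cpu(int cpu) {
                  cpu_set_t set;
                  CPU_ZERO(&set);
                  CPU_SET(cpu, &set);
                  return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
              }

              int main() {
                  if (!pin_current_thread_to_cpu(0))
                      std::perror("pthread_setaffinity_np");
                  // ... run the latency-sensitive work here ...
                  return 0;
              }
              ```

              The same effect can be had from the outside with taskset or numactl; the point is simply to stop the scheduler from bouncing a hot thread between core complexes.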

              Comment


              • #47
                I won't change my PC right now (lack of cash, and my i5 4670K is holding up quite well @4.2 GHz), but if I had to build a new one now, I wouldn't get the 1800X - the 1700 is much cheaper (and overclocks quite well, moreover it doesn't have the 20°C temp offset). I'm quite tempted by the 1600 (with X or not), which mixes good IPC, lots of cores/threads, reasonable price and nice frequencies.

                Comment


                • #48
                  Originally posted by caligula View Post
                  It doesn't really matter if an i7 7700K core is 50% faster than a Zen core. Most tasks are malleable. Intel would need to overclock to 64 GHz to beat a 256-thread AMD system. If you think game code won't scale, give me an example and I'll tell you how to parallelize it.
                  i7 7700K, Apache Benchmark v2.4.7, 50% faster than the 1800X at "100 requests being carried out concurrently" = doesn't really scale, or is broken?!

                  Comment


                  • #49
                    Originally posted by caligula View Post

                    This prediction appeared 13 years ago: http://www.gotw.ca/publications/concurrency-ddj.htm

                    Intel announced the i7 3770K in 2012. This year the same CPU market segment offers the 7700K. How much performance did we gain in the last five years? Not much. I recall that the current performance increase in i7 CPUs is around 4% per year. According to this (http://preshing.com/20120208/a-look-...u-performance/) it used to be ~20% per year, and before the multicore boom around 50-60% per year. So single-core speedups are a dead end. People should read about the law of diminishing returns. Unless we switch to other materials or technology (light?) in manufacturing, nothing can be done.

                    Multiple cores, on the other hand, help us by scaling a lot better. Moore's law is still alive and kicking: process nodes keep shrinking, and we can cram in twice as many transistors every 18-24 months, at least for a few more years. Twice the transistor count means twice as many cores. Assume multithreaded code scales by 50-90% for every 100% more cores. We already scale about 10x better by adding cores than by fine-tuning IPC and clock speed.

                    Game developers are living in false beliefs if they think things will stay this way forever. There's no law or fact that says game code won't scale. Games are the most passionate users of multiple cores on the GPU: they scale to 4000 CUDA cores just fine. If you can't fit more cores in a chip, you move to multiple sockets. You can already fit 32 Zen cores in a chip and 4 sockets on an EATX board. That's 256 threads. It doesn't really matter if an i7 7700K core is 50% faster than a Zen core. Most tasks are malleable. Intel would need to overclock to 64 GHz to beat a 256-thread AMD system. If you think game code won't scale, give me an example and I'll tell you how to parallelize it.
                    Except that it doesn't. And that's where your whole construct collapses.
                    It puzzles me to see people thinking multithreading is the new kid on the block that will save puppies from dying once we figure out how to put it to good use. The reality is that we've had (super)computers running hundreds of threads for decades, and we've written code for them. But the simple fact is that not all problems are infinitely parallelizable, and even when you find tasks that are, you're hit by other problems that hurt multithreading (e.g. memory coherence - look what happens when your data resides in another CCX's cache).
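
                    As a small illustration of that coherence cost (my own sketch, not from the post; compile with -pthread): two threads bumping counters that happen to share a cache line force that line to bounce between cores (and, on Ryzen, potentially between CCXs), while padding each counter onto its own line removes the contention.

                    ```cpp
                    #include <atomic>
                    #include <cstdint>
                    #include <thread>

                    struct SharedLine {      // both counters land in the same 64-byte cache line
                        std::atomic<std::uint64_t> a{0};
                        std::atomic<std::uint64_t> b{0};
                    };

                    struct PaddedLine {      // each counter gets its own cache line
                        alignas(64) std::atomic<std::uint64_t> a{0};
                        alignas(64) std::atomic<std::uint64_t> b{0};
                    };

                    // Two threads, each hammering its own counter; they never touch the same variable.
                    template <typename Counters>
                    void hammer(Counters& c, std::uint64_t iters) {
                        std::thread t1([&] {
                            for (std::uint64_t i = 0; i < iters; ++i) c.a.fetch_add(1);
                        });
                        std::thread t2([&] {
                            for (std::uint64_t i = 0; i < iters; ++i) c.b.fetch_add(1);
                        });
                        t1.join();
                        t2.join();
                    }

                    int main() {
                        SharedLine shared;
                        PaddedLine padded;
                        hammer(shared, 10'000'000);  // typically much slower: false sharing
                        hammer(padded, 10'000'000);  // same work, no line ping-pong
                    }
                    ```

                    The arithmetic is identical in both runs; only the memory layout changes, which is why "infinitely parallelizable" on paper does not always survive contact with the memory system.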

                    My recommendation is to stop trying to predict the future. Buy what works for you now and let the people in the know handle the advancements. When/if advancements come, upgrade and enjoy. When/if they don't, enjoy your current rig.

                    This whole multicore craze reminds me of the early 2000s, when people were convinced that since they had suddenly started thinking about EVs, everyone would be driving one by 2005.
                    Last edited by bug77; 19 May 2017, 06:31 AM.

                    Comment


                    • #50
                      I did not expect this kind of mixed-bag result... Okay, maybe for gaming, but look at the c-ray test versus the static serve test; something just doesn't look right. Can anyone compare these results to similar tests done on Windows?

                      Comment
