Intel Core i5 11600K + Core i9 11900K Linux Performance Across ~400 Benchmarks

  • #41
    My 5800X is faster than the newest i9? Ahahahaha!
    Last edited by creative; 31 March 2021, 02:37 PM.



    • #42
      Originally posted by coder View Post
      A lot of people game on them. Just on the lowest settings and maybe not with the latest games. If they were much less capable, it'd be bad news for a ton of people, right now, who cannot get or afford dGPUs.

      Another thing they're good for is GPU-compute. While not the order-of-magnitude improvement that dGPUs offer, iGPUs are at least comparable to the CPU cores and on a far smaller power budget. You can run intel_gpu_top and see for yourself, while running some GPU-compute benchmarks.

      Also, they're good at video encoding and transcoding acceleration. Again, not on the same level as Nvidia GPUs, but very well for what they are.
      In order:
      The alternative to an iGPU is either a dGPU or a chipset GPU--like Intel used to use. The former option would still be viable if Intel hadn't used their iGPUs to kill off that market--people can't afford an iGPU/processor *and* a low-end dGPU. If Intel had stuck to dedicated CPUs with a chipset GPU or a dGPU, that market would still be around. But since Intel *forced* the iGPU on their customers, they cut out the budget for the low-end dGPU--or the chipset GPU. This was a market decision by Intel to pressure the low end of nVidia and AMD, made for business reasons, not technological ones.

      It's specifically a *horrible* technological solution, as it forces the GPU and CPU to share not only a process node but the very same die--leading to a loss in yield. It also memory-starves the GPU, which is stuck breathing through the little straw of a CPU's DRAM interface instead of the much wider, GPU-appropriate GDDR bus. Either the chipset GPU or the dGPU option would have been better on all technological counts than the huge iGPU. The only place where the iGPU remotely makes sense is in highly integrated solutions (Atom-type SoCs) and maybe in laptops--yet Intel has put their *largest* iGPUs in their mobile platforms, which makes the least sense.
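
      To put rough numbers on the bandwidth point (illustrative parts only, not figures from this article): an iGPU sharing dual-channel DDR4-3200 with the CPU cores has at most 2 channels x 8 bytes x 3200 MT/s = 51.2 GB/s available--and the CPU is eating into that too. A midrange dGPU with a 192-bit GDDR6 bus at 14 Gbps gets 24 bytes x 14 GT/s = 336 GB/s all to itself, roughly 6-7x more.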

      There is no doubt that a dGPU would beat an iGPU in GPU compute. The improved memory performance alone would be sufficient to justify that statement. Keep in mind, I'm not saying an iGPU isn't useful; I'm saying that a much smaller (and much cheaper, thanks to the reduced area and increased yield) CPU paired with a separate dGPU is a *better* solution.

      WRT encode and transcoding, that's a pretty unimportant niche. Given the poor quality of encoding that the hardware solutions provide, you would only use it for local network transcoding--say a PLEX server to another device on the same LAN. You would not use it for storage or remote transfer style transcoding. The quality/bitrate tradeoff of the hardware encoders is just too poor. Software encoding is much higher quality. Hardware transcoding isn't even a good choice for streaming given the poor quality/bitrate of hardware encoders.
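
      For anyone who would rather measure this than take either of us at our word, the comparison is easy to run on Linux. A rough sketch (assumes an ffmpeg build with libx264 and VAAPI support; input.mp4 is a placeholder clip and the render-node path may differ on your machine):

        # Software encode at a fixed bitrate, as the quality reference point
        ffmpeg -i input.mp4 -c:v libx264 -preset slow -b:v 4M -c:a copy sw.mp4

        # Hardware (VAAPI) encode of the same clip at the same bitrate
        ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
               -vf 'format=nv12,hwupload' -c:v h264_vaapi -b:v 4M -c:a copy hw.mp4

      Same codec, same bitrate--whatever quality difference remains comes down to the encoder implementation.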

      In summary, Intel led the market down this path for their own business reasons and we're all suffering from it. At least AMD didn't make the same mistake and has provided us with an option to get away from Intel. And look how well that's worked out for Intel: they're fab-limited, making overly large chips that can't compete in price/performance or performance/power.
      Last edited by willmore; 31 March 2021, 03:29 PM.



      • #43
        Originally posted by willmore View Post
        The alternative to an iGPU is either a dGPU or a chipset GPU--like Intel used to use.
        The reason they went inside the CPU package is so they could continue to share its memory, once the memory controller moved on die.

        Originally posted by willmore View Post
        It's specifically a *horrible* technological solution as it forces the GPU and CPU to share not only a process node, but the very same die--leading to loss in yield.
        It's not accurate to say they must share the same die, however. Intel's first generation (Nehalem) to have them integrated into the CPU package actually had them on a separate die.

        Now, unless you're also imagining that Intel wouldn't generally use its latest process node for the iGPUs, the fact that they're integrated on die isn't a real issue. EUs with defects can be disabled and chips binned, accordingly. I understand you wish Intel would use an older process node on them, but you also wish they were smaller. So, if we accept as given that they use the same process node as the CPU, then combining them onto a single die is almost certainly a net savings.

        Originally posted by willmore View Post
        It also memory starves the GPU as it's stuck breathing through the little straw of a CPU's DRAM interface instead of the much wider and GPU appropriate GDDR bus.
        Sure, but iGPUs are fine for most people. So, the tighter integration is a cost-saving measure.

        Originally posted by willmore View Post
        WRT encode and transcoding, that's a pretty unimportant niche. Given the poor quality of encoding that the hardware solutions provide, you would only use it for local network transcoding--say a PLEX server to another device on the same LAN. You would not use it for storage or remote transfer style transcoding. The quality/bitrate tradeoff of the hardware encoders is just too poor. Software encoding is much higher quality. Hardware transcoding isn't even a good choice for streaming given the poor quality/bitrate of hardware encoders.
        Apparently your knowledge of this subject is badly out of date. While not on par with the best quality available from software encoders at the highest settings, quality hasn't been a real issue with GPU-accelerated encoding for a while.

        And I don't know if you've been videoconferencing during the pandemic, but that and things like game-streaming are prime use cases for hardware-accelerated encoding (and decoding).

        I'll grant that transcoding is a niche not applicable to most consumers, but it's actually a big market in cloud-based video services.



        • #44
          Originally posted by coder View Post
          The reason they went inside the CPU package is so they could continue to share its memory, once the memory controller moved on die.
          No, that's a when, not a why. They didn't have to go inside the CPU die at that point; as you mention later, they could have stayed in the CPU package and been on another die.

          Originally posted by coder View Post
          It's not accurate to say the must share the same die, however. Intel's first generation (Nehalem) to have them integrated into the CPU package actually had them on a separate die.
          I think you mean Arrandale and Clarkdale (hereafter I will just refer to them together as Arrandale), processors using the Nehalem microarchitecture. Not all Nehalem processors did such a thing; most of them did not integrate graphics either on package or on die. Arrandale used two dies: a classic northbridge with the memory controller and graphics (45 nm), and the CPU (32 nm). This is a fine solution and I don't have any issue with it. It keeps the graphics off the CPU die, allows it to use a different process and, more importantly, makes it a separate yield unit. Note that this was the only time they did this; from then on, they integrated the GPU and CPU on the same die.

          Originally posted by coder View Post
          Now, unless you're also imagining that Intel wouldn't generally use its latest process node for the iGPUs, the fact that they're integrated on die isn't a real issue. EUs with defects can be disabled and chips binned, accordingly. I understand you wish Intel would use an older process node on them, but you also wish they were smaller. So, if we accept as given that they use the same process node as the CPU, then combining them onto a single die is almost certainly a net savings.
          That is exactly what I would imagine. It's what they did in that first generation (Arrandale/Clarkdale) mentioned previously. Given that the GPU and CPU groups at Intel are separate, it makes sense for each to work with the process best suited to its designs. So, yes, the fact that they are integrated into one die is a big deal.

          I also take issue with your assertion that combining them on one die isn't a yield issue because Intel can just disable bad EUs. Yes, for the GPU, that's feasible, but binning is a cost-recovery process, not something you want to plan around. You don't want to sell full-sized dies as less than fully operational any more than you are forced to; review discussions of how companies work very hard to avoid disabling/binning any more than they can get away with. Keep in mind that they pay for the full die even if they only get to sell it as a partially disabled one, and that rapidly becomes cost-ineffective. It's much cheaper--as AMD has demonstrated--to separate the failure domains into separate dies (or chiplets). This has the added benefit of allowing less critical parts (like the GPU) to be fabricated on different (and more cost-appropriate) processes--like the separate I/O die that AMD uses, which is not fabricated on the same process as the CPU chiplets.
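
          A simple back-of-envelope yield model makes the area argument concrete (the defect density here is illustrative, not anyone's actual number): with a basic Poisson model, yield is roughly e^(-area x defect density). At 0.1 defects/cm^2, a 200 mm^2 (2 cm^2) monolithic die yields e^(-0.2), about 82%, so roughly 18% of candidates have a defect somewhere. Split the same silicon into two 100 mm^2 dies that are tested before being paired and each yields e^(-0.1), about 90%, so only roughly 10% of the silicon is hit--and a defect in the GPU half no longer drags down a perfectly good CPU die.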

          I'm not sure what you're referring to with your use of the word "wish". I have no wishes about Intel; any feelings I have are for the best interests of consumers like me. Fortunately, we have AMD acting sanely to satisfy any 'wish' I could have. I will assert that what Intel is doing isn't in the best interest of the consumer, but in the best interest of Intel. When those two align, great; when they differ, I will call out Intel.

          Originally posted by coder View Post
          Sure, but iGPUs are fine for most people. So, the tighter integration is a cost-saving measure.
          If that were the case, then we would only see the integrated designs on the lower-end, high-volume chips. But Intel does it even up to the i7 and i9--the big-die parts where integrating the GPU makes the least sense. If Intel limited GPU integration to the lower-end parts (the ones 'most people' buy), we wouldn't be having this discussion.

          Originally posted by coder View Post
          Apparently your knowledge of this subject is badly out of date. While not on par with the best quality available from software encoders at the highest settings, quality hasn't been a real issue with GPU-accelerated encoding for a while.

          And I don't know if you've been videoconferencing during the pandemic, but that and things like game-streaming are prime use cases for hardware-accelerated encoding (and decoding).

          I'll grant that transcoding is a niche not applicable to most consumers, but it's actually a big market in cloud-based video services.
          My knowledge of video transcoding is current; your assertion is factually baseless and pejorative. I will point out that we were discussing encoding and transcoding, not decoding alone. Decoding can cheaply be made high quality with little hardware and is not the concern; encoding (and the encoding half of transcoding) is what's at issue. No, it is not of the same quality as software encoding in terms of quality/bitrate--to assert so is absurd. Little has changed in the quality of hardware video encoding; the improvements have been in encoding speed, which allows larger resolutions to be encoded in real time. The only quality/bitrate gains have come from hardware encoders supporting better codecs (H.265 instead of H.264, for example), and they still lag behind software encoders of the same codec.
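
          If you want an objective number instead of eyeballing screenshots, ffmpeg can score each encode against the original. A sketch, continuing the file names from the earlier example (the ssim filter is built in; libvmaf requires an ffmpeg build compiled with it):

            # Structural similarity of the hardware encode vs. the source
            ffmpeg -i hw.mp4 -i input.mp4 -lavfi ssim -f null -

            # VMAF score, if your ffmpeg build includes libvmaf
            ffmpeg -i sw.mp4 -i input.mp4 -lavfi libvmaf -f null -

          Run both encodes through the same metric at the same bitrate and the gap (or lack of one) shows up right in the log.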

          Cloud based video services aren't using an iGPU to do their work.



          • #45
            Originally posted by willmore View Post
            No, that's a when, not a why. They didn't have to go inside the CPU die at that point as you later mention, they could have stayed in the CPU package and been on another die.
            You just shifted your position. Previously, you said they should stay in the motherboard chipset.

            And of course it's a "why"! Once the memory controller moved on-die, putting them in-package was the most practical and cost-effective option!

            Originally posted by willmore View Post
            I also take issue with your assertion that combining them on the die isn't a yield issue as Intel could just disable bad EU. Yes, for the GPU, that's feasable. Binning is a cost recovery process and is not something that you want to plan on. You don't want to be selling full sized die as less than fully operational any more than you are forced to. Please review discussions of how companies work very hard to not have to disable/bin any more than they can get away with. Keep in mind that they have to pay for the full die even if they only get to sell it as a partially disabled one.
            It fills a market niche so important to them that they artificially limit higher-functioning dies just to serve it. If they really cared so much about minimizing every wasted mm^2, they'd fab more die layouts of different sizes. However, Comet Lake served all of its product tiers via only two die layouts: 10-core and 6-core. And Intel's entire range of workstation and server CPUs is traditionally served by only three die layouts.

            Originally posted by willmore View Post
            It's much cheaper--as AMD has demonstrated--to separate the failure domains into separate die (or chiplets). This has the added benefit of allowing less critical parts (like the GPU) to be fabricated in different (and most cost appropriate) processes. Again, like the separate I/O die that AMD uses--which is not fabricated in the same process as the CPU chiplets.
            AMD's chiplet strategy is a poor analogy, since it's really about building big CPUs. The fact that they're used in desktop processors is merely a side-effect.

            As proof of this, you can look to their APUs, which use a monolithic die with up to 8 cores and a GPU! With all their chiplet expertise, and indeed even CPU chiplets available to utilize for the task, don't you think they'd have used chiplets in their APUs, if it really made as much sense as you claim? They could've saved significant engineering costs by doing so, which means there must be a more significant upside to the monolithic approach than you think.

            Originally posted by willmore View Post
            intel does it even up to the i7 and i9. These are the big die parts where integrating the GPU makes very little sense.
            At home and at work, we use desktops with i7 & i9 mainstream Intel CPUs and only the integrated graphics. That's enough for running even multiple hi-res monitors. By doing this, we save a couple hundred $ per machine. That's not insignificant.

            Further, I'm aware of appliance-type products that use their iGPUs to offload compute and even transcoding work from the CPU cores.

            Originally posted by willmore View Post
            Your assertion is baseless in fact and pejorative.
            ...if only you could harness that righteous indignation for the sake of good.


            Originally posted by willmore View Post
            Cloud based video services aren't using an iGPU to do their work.
            Apparently enough of them to create a market for two generations of their Visual Compute Accelerator product, which featured 3x i7-class CPUs on a single PCIe card. The only reason they're not building yet another generation is that they now have an equivalent built with Xe-class GPUs. This is the same GPU found in their Tiger Lake laptop CPUs, but without the CPU cores. That should serve as a testament to the demand by hyperscalers for the video features in their iGPUs, because it doesn't make a lot of sense for most GPU-compute or AI workloads.

