Intel UHD Graphics 770 / Alder Lake GT1 Linux Graphics Performance


  • #31
    Originally posted by WolfpackN64 View Post
    Intel's iGPU performance has come a long way, but it seems they still have a long way to go.
    You have to consider that Intel's 32 EU iGPU has only the equivalent compute power of a 4 CU AMD iGPU.

    The EUs in Intel iGPUs have an effective fp32 SIMD width of 8, whereas the CUs in AMD's iGPUs have an effective SIMD width of 64. So, a 32 EU iGPU has only the equivalent of 256 of AMD's "shaders" or what Nvidia misleadingly calls "cores". Meanwhile, AMD iGPUs with 8 CUs (like the one tested here) have 512 "shaders". Given that, Intel's performance is surprisingly strong!
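The arithmetic behind that comparison can be sketched in a few lines. The 8-wide and 64-wide figures are the effective fp32 SIMD widths claimed above; this is a back-of-envelope lane count that ignores clocks, ROPs, and memory bandwidth, not a measured result:

```python
# Effective fp32 "shader" counts: execution units x effective SIMD width.
# Widths are the ones asserted in the post above.

def effective_shaders(units: int, simd_width: int) -> int:
    """Number of fp32 lanes: units times effective SIMD width."""
    return units * simd_width

intel_uhd770 = effective_shaders(32, 8)   # 32 EUs x 8-wide lanes
amd_vega8 = effective_shaders(8, 64)      # 8 CUs x 64-wide lanes

print(intel_uhd770, amd_vega8)  # 256 512
```

By this count the Vega 8 has twice the raw lanes, which is why the Intel results read as strong for the silicon spent.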

    Now, if you want to see what an Intel iGPU is really capable of, go look for benchmarks of their Tiger Lake iGPUs with 96 EUs. Those consistently beat AMD APUs in graphics benchmarks.



    • #32
      Originally posted by mikkl View Post
      Performance is great for only 32 EUs; the difference to the 11900K is much higher than the 19% GPU clock speed increase in many of the tests.
      Good call. I hadn't noticed that, but you're right. I wonder what else Intel improved in them. There must've been some IPC improvements, or maybe it's all down to the cache and memory subsystem?



      • #33
        Originally posted by Michael View Post

        Most of the heavier games are too slow to be practical right now, at least until more driver optimizations are out... Hopefully the Intel Mesa folks have a few tricks up their sleeve still.
        It would be nice to have these Intel GPU/CPU tests on Clear Linux. It is supposed to be Intel-optimized, so we will see if they make any more optimizations for Alder Lake...



        • #34
          Originally posted by bug77 View Post
          It would be interesting to see DDR5 vs DDR4, because graphics tend to be latency sensitive...

          Also kudos to Intel for finally realizing there's no point in wasting >30% of the die area for GPU in a desktop part.
          Quite the opposite: graphics workloads are RAM-bandwidth intensive, not latency sensitive. Also, many people would love to buy a desktop part with fewer CPU cores and more GPU cores.



          • #35
            Originally posted by coder View Post
            You have to consider that Intel's 32 EU iGPU has only the equivalent compute power of a 4 CU AMD iGPU.

            The EUs in Intel iGPUs have an effective fp32 SIMD width of 8, whereas the CUs in AMD's iGPUs have an effective SIMD width of 64. So, a 32 EU iGPU has only the equivalent of 256 of AMD's "shaders" or what Nvidia misleadingly calls "cores". Meanwhile, AMD iGPUs with 8 CUs (like the one tested here) have 512 "shaders". Given that, Intel's performance is surprisingly strong!

            Now, if you want to see what an Intel iGPU is really capable of, go look for benchmarks of their Tiger Lake iGPUs with 96 EUs. Those consistently beat AMD APUs in graphics benchmarks.
            GPU compute units are only part of the equation: memory bandwidth plays an important role, plus there are other things that are often left out of reviews but very important, like raster operators and texture units.
            Comparing numbers that way is like saying that 16 chestnuts are equivalent to 4 apples.

            The only common ground is rough efficiency, though even that still depends on the production process.



            • #36
              Originally posted by blackshard View Post
              GPU compute units are only part of the equation: memory bandwidth plays an important role,
              Yes, we know. However, Tiger Lake manages to put up some impressive numbers with 3x the EUs of this GPU, and only LPDDR4 memory (i.e. not even DDR5). That shows Intel didn't hit a bandwidth wall with only 32 EUs.

              Originally posted by blackshard View Post
              plus there are other things that are often left out of reviews but very important, like raster operators and texture units.
              The presumption is that these guys know what they're doing and added texture units and ROPs roughly in proportion to the amount of compute. That makes compute a good first-order approximation of a GPU's power. The first two numbers to examine on a GPU are its raw compute performance and its memory bandwidth. In the case of AMD's Infinity Cache, I'd probably look at bandwidth to it, rather than to memory.

              Originally posted by blackshard View Post
              The only common ground is rough efficiency, though even that still depends on the production process.
              You can see how efficient an implementation is by looking at its benchmark scores normalized by its raw compute performance. That tells you about architectural efficiency and correlates to area-efficiency.
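A minimal sketch of that normalization, using illustrative (made-up) lane counts, clocks, and scores rather than anything measured:

```python
# Architectural-efficiency sketch: benchmark score per theoretical fp32 TFLOP.
# The lane count, clock, and score below are illustrative assumptions.

def peak_tflops(fp32_lanes: int, clock_ghz: float) -> float:
    """Peak fp32 TFLOPS: lanes x clock x 2 FLOPs per FMA, scaled to tera."""
    return fp32_lanes * clock_ghz * 2 / 1000.0

def efficiency(score: float, tflops: float) -> float:
    """Benchmark points delivered per theoretical TFLOP."""
    return score / tflops

peak = peak_tflops(256, 1.55)            # e.g. 256 lanes at 1.55 GHz
print(round(peak, 3))                    # 0.794 (TFLOPS)
print(round(efficiency(60.0, peak), 1))  # 75.6 (points per TFLOP)
```

Comparing that points-per-TFLOP figure across GPUs is what separates architectural efficiency from sheer unit count.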



              • #37
                Thank you Michael for this article! 👍️

                I find the GPU performance of Alder Lake rather disappointing compared to the old AMD Ryzen APUs; I had hoped Intel would have something better. Also, Alder Lake consumes a huge amount of energy and is difficult to cool.
                It is also nearly impossible to find DDR5 memory, so you can forget about that for at least a year.



                • #38
                  Originally posted by uid313 View Post
                  I find the GPU performance of Alder Lake rather disappointing compared to the old AMD Ryzen APUs; I had hoped Intel would have something better.
                  Lotsa posts in this thread already covered the underlying reason, which is that Intel just decided not to devote much silicon to the iGPU in its desktop chips. The one in Tiger Lake is the same architectural generation and made on a similar process node, but much bigger. It outperforms the 8 CU Vega in these APUs pretty easily.



                  • #39
                    Originally posted by coder View Post
                    Lotsa posts in this thread already covered the underlying reason, which is that Intel just decided not to devote much silicon to the iGPU in its desktop chips. The one in Tiger Lake is the same architectural generation and made on a similar process node, but much bigger. It outperforms the 8 CU Vega in these APUs pretty easily.
                    Yeah, I read that, and it seems Intel Xe graphics is great; it's just too bad Intel didn't make a desktop CPU that gives us great iGPU performance.



                    • #40
                      Originally posted by coder View Post
                      Good call. I hadn't noticed that, but you're right. I wonder what else Intel improved in them. There must've been some IPC improvements, or maybe it's all down to cache & memory subsystem?
                      I remember reading somewhere that they added ALUs as well, so it's possible that the old 8:1 ratio between EUs and CUs no longer applies.

                      Haven't seen any real details yet though, and Intel still describes a Vector Engine (the EU replacement) as 256-bit, i.e. 8 × 32-bit FP, so at first glance it's either the same number of ALUs or they are doubled up for parallel execution.

                      The performance results suggest that a VE/EU is still roughly 1/8th of a CU, i.e. that the Alder Lake GPU has roughly half the throughput of the Vega 8 in Renoir/Cezanne.
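The 1:8 ratio mentioned above falls straight out of the vector widths (assuming plain fp32 packing and no co-issue, per the 256-bit figure quoted above):

```python
# Lanes per Intel Vector Engine / EU: 256-bit registers over 32-bit floats.
VE_BITS = 256
FP32_BITS = 32
lanes_per_eu = VE_BITS // FP32_BITS   # 8 fp32 lanes per VE/EU

CU_LANES = 64                         # effective SIMD width of an AMD CU
print(CU_LANES // lanes_per_eu)       # 8 -> one CU is roughly eight EUs
```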
                      Last edited by bridgman; 07 November 2021, 01:01 AM.

