Intel Integrated Graphics Performance From Gen9 To Meteor Lake Arc Graphics


  • #31
    Originally posted by qarium View Post
I don't think that for this type of SoC there is a need for a chiplet design. A monolithic design looks better to me.
    That's the only way they can have parts of the chip on different processes. Desktop Arc is made on N6; the GPU tile is on N5. They would need to port the design to the Intel 4 libraries in order to put it on a monolithic die where the CPU is also on Intel 4. It makes sense not to have to maintain two versions of Arc that can be made at two fabs.

    But they're also using tiles for the same reason that AMD does. The SOC and I/O dies are on 6nm because it's an older, more mature process, those dies contain PHYs, which tend not to shrink as well, and the hardware blocks on them don't need clock speeds as aggressive as a GPU or CPU. They can also re-use these tiles in other designs.

    Originally posted by qarium View Post
    if you say it has the same shader count and the same architecture as the Intel Arc A380
    "Process Size: 6 nm. Transistors: 7,200 million. Density: 45.9M / mm². Die Size: 157 mm²."

    so we have a die shrink to 5nm, but it's safe to say that this iGPU has something like 7.2 billion transistors.
    Yes, but like I said, the A380 includes the display, audio, and media engines as well as the PHYs and controllers for PCIe and the memory. The GPU tile in the 155H is only the shader cores and cache; all of the aforementioned things have been moved into the SOC die because they either don't shrink as well, don't need to be clocked as aggressively, or don't need to scale, so they can be re-used across products.

    In other words, the GPU tile is quite a bit smaller than the A380.



    The A380 annotated die shot is on the right. So just imagine cutting off the right side and the bottom, then reconfiguring what's left to be more vertical, then shrinking it to N5. That would be the GPU tile.
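    To put rough numbers on that, here is the back-of-envelope arithmetic (a sketch only: the 60% shader-plus-cache fraction is an eyeball estimate from the annotated die shot, and the ~1.3x N6-to-N5 density gain is an approximation, not a published figure):

    ```python
    # Back-of-envelope estimate of the Meteor Lake GPU tile size,
    # starting from the published Arc A380 figures quoted above.
    a380_die_mm2 = 157.0       # A380 die size on N6
    a380_transistors_b = 7.2   # 7,200 million transistors

    # Assumption: roughly 60% of the A380 die is shader cores + cache;
    # the rest (display, media, PHYs, controllers) moves to the SOC tile.
    shader_fraction = 0.60

    # Assumption: ~1.3x density improvement going from N6 to N5.
    n6_to_n5_density_gain = 1.3

    gpu_tile_mm2 = a380_die_mm2 * shader_fraction / n6_to_n5_density_gain
    print(f"Estimated GPU tile size: ~{gpu_tile_mm2:.0f} mm^2")
    ```

    Under those assumptions the tile comes out around 70-75 mm², i.e. well under half the A380's 157 mm².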



    • #32
      Originally posted by qarium View Post

      True, but AMD will just put in faster RAM and increase the clock speed a little, and then they have the same performance

      and Intel will use a little less power, but most people will not care about this if the price is cheaper.

      And then in less than 5 months we have RDNA4 GPUs on the AMD side.
      I already said this, but the 7840U GPU already clocks way higher than the 155H GPU, and both can use faster RAM than they have here. Both can use 7400 MT/s LPDDR5X on a 12. AMD doesn't make laptops; the RAM used was chosen by the laptop maker. Both can also run up to 100 MHz faster.
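      For what it's worth, the peak bandwidth from that memory would be identical for both chips. A quick sketch of the arithmetic (the 128-bit bus width here is my own assumption, since the figure in the post is cut off):

      ```python
      # Peak memory bandwidth if both the 7840U and the 155H
      # use the same 7400 MT/s LPDDR5X.
      transfer_rate_mt_s = 7400   # mega-transfers per second
      bus_width_bits = 128        # assumed bus width; not stated in the post

      bytes_per_transfer = bus_width_bits / 8
      bandwidth_gb_s = transfer_rate_mt_s * bytes_per_transfer / 1000
      print(f"Peak bandwidth: {bandwidth_gb_s:.1f} GB/s")  # same for either chip
      ```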



      • #33
        Originally posted by coder View Post
        That's what we're doing.

        Intel's previous generation was behind AMD, and now they've leapfrogged AMD, in spite of being at a slight node disadvantage and using less power!




        And that doesn't even take into account some of the vkpeak benchmarks which show Intel's scalar performance being up to 70% faster than AMD's. Though I wish there were perf/watt metrics for those, too.



        • #34
          Originally posted by coder View Post
          Intel's previous generation was behind AMD, and now they've leapfrogged AMD
          Leapfrogging seems like a major stretch. They're basically competitive with one another. Granted, for the first time in maybe forever. But leapfrogging? That indicates a major advantage to me.

          The difference may be in how much weight we give to synthetic benchmarks, where Intel does have an advantage. I just give zero weight to those, and focus on real applications instead.



          • #35
            Originally posted by smitty3268 View Post
            Leapfrogging seems like a major stretch. They're basically competitive with one another. Granted, for the first time in maybe forever. But leapfrogging? That indicates a major advantage to me.
            The results I showed in the above post reflect a 7.5% average benefit. The difference between AMD's 680M and 780M iGPUs is only about 15%. So, Intel beat AMD by half as much as AMD's own generational improvement. Call it what you want, as long as you don't say it's insignificant.
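            The comparison is just a ratio of the two percentages above (both figures come from this thread, not from any official source):

            ```python
            # Intel's average lead over the 780M, relative to AMD's own
            # 680M -> 780M generational improvement.
            meteor_lake_lead = 0.075  # 155H vs 780M, benchmark average
            amd_gen_gain = 0.15       # 780M vs 680M, approximate

            ratio = meteor_lake_lead / amd_gen_gain
            print(f"Intel's lead is {ratio:.0%} of AMD's generational gain")
            ```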

            Originally posted by smitty3268 View Post
             The difference may be in how much weight we give to synthetic benchmarks, where Intel does have an advantage. I just give zero weight to those, and focus on real applications instead.
            If you don't like Michael's benchmark suite, then why not suggest to him which things to drop or add?

            Also, please share examples of the Meteor Lake benchmarks you're seeing, which show anything different than what Michael found.



            • #36
              Originally posted by coder View Post
              So, Intel beat AMD by half as much as AMD's own generational improvement. Call it what you want, as long as you don't say it's insignificant.
              Why? Significance is a subjective, relative evaluation in the context of individual expectations and requirements. For most consumers, in most workloads, a 7.5% difference is going to be practically imperceptible. Why should any of those consumers call it a significant advantage?
              Last edited by drakonas777; 30 December 2023, 03:18 PM.



              • #37
                Originally posted by drakonas777 View Post
                Why? Significance is a subjective, relative evaluation
                Yes, and for Intel to surpass AMD by half of their generational improvement, while using less energy and being on an inferior node is not an insignificant achievement, in most reasonable interpretations.

                Originally posted by drakonas777 View Post
                For most consumers, in most workloads, a 7.5% difference is going to be practically imperceptible.
                Nobody is saying you should upgrade from Phoenix to Meteor Lake, or that GPU performance is the sole reason to choose Meteor Lake. An advantage doesn't have to be overwhelming to still count as an advantage.



                • #38
                  Originally posted by coder View Post
                  AMD's Radeon 780M was, itself, a big generational jump in performance. For Intel to catch up to it would be like AMD releasing an RX 8900 XT that matches Nvidia's RTX 5090. I think that would be pretty impressive.
                  Not going to happen, as AMD killed off big RDNA4. RDNA4 is going "midrange" now: 7900 XTX level of performance being the max, but more power efficient and with bugs fixed to allow higher clocks without the black screens and artifacts that plagued RDNA3.

                  AMD refocused their efforts on RDNA5 to be competitive again at the high end. At least according to Moore's Law Is Dead.



                  • #39
                    Originally posted by middy View Post
                    Not going to happen, as AMD killed off big RDNA4. RDNA4 is going "midrange" now.
                    I've heard the rumors. Sad. AMD will probably regret doing that, but I know their finances were looking a little rough.



                    • #40
                      Originally posted by coder View Post
                      to surpass AMD by half of their generational improvement
                      This is a "metric" you made up. It's not an argument at all.

                      Originally posted by coder View Post
                      while using less energy


                      That's for the whole SoC. The assumption that the Intel iGPU itself is more power efficient than the AMD iGPU is pure speculation for now.

                      Originally posted by coder View Post
                      and being on an inferior node

                      Slightly inferior. Let's not give the impression it's a full node disadvantage.

                      Originally posted by coder View Post
                      not an insignificant achievement, in most reasonable interpretations.

                      For me - not really. You could make the argument that the 780M is already an impressive iGPU, so surpassing it even by 7.5% is also impressive, and that the ability to achieve this is a significant achievement. I don't think there is anything fundamentally wrong with that way of thinking, but I look at it from a different perspective. For me, the 780M is a good iGPU, but I don't consider it particularly impressive. As a consequence, I don't consider the ML iGPU impressive either. Also, I don't consider 7.5% a big advantage. At the end of the day, I have no intention of arguing about semantics or subjectivity, so I respect others' opinion that it's impressive. I just don't feel the same way after seeing the benchmarking data.

                      Originally posted by coder View Post
                      An advantage doesn't have to be overwhelming to still count as an advantage.


                      Nobody says it's not an advantage. I just don't think a 7.5% advantage is a significant advantage - that's all. Perhaps 7.5% is a lot to you for some reason and you think differently - that's fine.

