Intel's New Iris Driver Gets Speed Boost From Changing The OpenGL Vendor String

  • #11
    Originally posted by coder View Post
    Is this based on anything more than pure speculation and assumptions? AFAIK, Intel has said absolutely nothing about the architecture of their Xe GPUs.
    It's highly unlikely that they'll make a completely new GPU architecture; their iGPU designs don't suck.

    • #12
      Originally posted by coder View Post
      Is this based on anything more than pure speculation and assumptions? AFAIK, Intel has said absolutely nothing about the architecture of their Xe GPUs.
      I've heard that the starting point for them is the Gen11 design, but of course we're talking about unreleased hardware there, so anything can happen.

      • #13
        Originally posted by coder View Post
        Is this based on anything more than pure speculation and assumptions? AFAIK, Intel has said absolutely nothing about the architecture of their Xe GPUs.
        It's not a farfetched assumption or anything - it's pretty much the minimum requirement to make a dGPU.

        iGPUs use system RAM and don't have dedicated VRAM. Intel's Iris Pro CPUs do contain a small amount of dedicated VRAM, which has shown huge performance improvements over the average iGPU.

        iGPUs also usually have a smaller number of shader units/execution units/cores etc. (whatever terminology you want to use), since they're not meant to run high-end games; they just need to render the desktop and typical desktop applications at 60+ FPS for the next 6-7 years, which is a pretty low bar (and also for reduced power consumption, which is super important in laptops, tablets, and in some cases desktops as well). They're intentionally low performance.

        Obviously, to make any kind of competitive dGPU, you need to add a good amount of dedicated VRAM and a lot more shader units/execution units/cores etc. to match the competition. This is the minimum requirement to make a dGPU. So there's nothing to be proved, no speculation or assumptions. They've just stated the plain and obvious truth.
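        (As an aside, since the article is about the GL_VENDOR string: it's easy to check what your own driver reports. Below is a minimal sketch of my own - assuming GLFW for context creation, though EGL or SDL would work just as well - that just prints the vendor/renderer/version strings for whichever GPU the context lands on.)

```c
/* Minimal sketch (assumes GLFW is installed): create a throwaway GL context
 * and print the strings the driver reports. These are the same GL_VENDOR /
 * GL_RENDERER strings that per-application workarounds key off of.
 * Build e.g.: cc probe.c -lglfw -lGL */
#include <stdio.h>
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    /* An invisible 1x1 window is enough to get a current GL context. */
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow *win = glfwCreateWindow(1, 1, "gl-probe", NULL, NULL);
    if (!win) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(win);

    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```

        On Mesa the GL_RENDERER string names the actual hardware, so it also makes it obvious whether a program ended up on the iGPU or a discrete card.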

        • #14
          Originally posted by starshipeleven View Post
          It's highly unlikely that they'll make a completely new GPU architecture; their iGPU designs don't suck.
          I didn't say it was implausible. I asked if it was based on any firm information. I don't think I could've been much clearer, yet you seem to have completely missed the point. dos1 made a rather strong assertion and I simply wanted to know if it was based on any news I'd missed.

          As for your and dos1's speculation that it'll simply be more of the same: the scale of their Xe effort is certainly big enough to be a break with their HD Graphics architecture. They announced products in 2017 that won't ship until 2020, and it's pretty clear the effort was well underway before then. They even created an entirely new division of the company to work on graphics products and related accelerators. That's the scale of time and resources you'd need to do a full re-design.

          https://www.anandtech.com/show/12017...hief-architect

          Just to be clear (which seems to be an issue with you), I'm not saying they will redesign - I'm saying we simply don't know. It's certainly possible they're doing a root-and-branch redesign. I believe their current architecture at least needs significant reworking in order to scale up efficiently.

          • #15
            Originally posted by dos1 View Post
            I've heard that the starting point for them is the Gen11 design, but of course we're talking about unreleased hardware there, so anything can happen.
            Heard where? Please cite your source!

            • #16
              Originally posted by sandy8925 View Post
              Intel's Iris Pro CPUs do contain a small amount of dedicated VRAM, which have shown huge performance improvements over the average iGPUs.
              And, of course, the fact they had 2-3x as many EUs as their other iGPUs had nothing to do with it.

              Originally posted by sandy8925 View Post
              iGPUs also usually have a smaller number of shader units/execution units/cores etc. (whatever terminology you want to use), since they're not meant to run high-end games; they just need to render the desktop and typical desktop applications at 60+ FPS for the next 6-7 years, which is a pretty low bar (and also for reduced power consumption, which is super important in laptops, tablets, and in some cases desktops as well). They're intentionally low performance.

              Obviously, to make any kind of competitive dGPU, you need to add a good amount of dedicated VRAM and a lot more shader units/execution units/cores etc. to match the competition. This is the minimum requirement to make a dGPU. So there's nothing to be proved, no speculation or assumptions. They've just stated the plain and obvious truth.
              I simply asked if dos1 had a source or was speculating. How on earth does your brain translate this into "@coder needs me to explain GPUs"?? I didn't ask how or why dos1 could think such a thing - just if it was based on any firm information.

              I read their whitepapers on Gen8, Gen9, and Gen11, and probably know a good deal more about their GPUs than you do. I've similarly read all of AMD's whitepapers, and deep dives on many of Nvidia's recent GPUs. I've possibly even written more OpenGL and OpenCL code than you have. What I need is news; sources. That's what I asked for. That's all I asked for.

              • #17
                Actually, since you all seem to be speculating, I can also play that game.

                I think that, if all Intel were doing was just scaling up their HD Graphics, adding some external GDDR (or HBM2) memory, and slapping it on a PCIe card, that shouldn't take them 3+ years. That's why I'm reflexively skeptical of the idea that they're not doing at least a slightly more fundamental reworking.

                Furthermore, as I've said, I think their current architecture doesn't actually scale up very well. Gen11 makes some needed changes, so it does take them in the right direction. Still, those EUs are far too narrow. It's probably going to burn a fair bit more power per FLOPS than a comparable AMD iGPU.

                Edit: lest you accuse me of being some kind of Intel iGPU hater, I'll confess I actually like how they went with such a narrow design. Having 24 EUs (168 threads) in a desktop iGPU makes it more amenable to highly-parallel tasks that aren't strictly numerical. I wish this were more commonly exploited, but most people seem to be using GPU compute for numerically-intensive (esp. floating point) applications. I'd love to see something like LLVM ported to run on Intel's current-gen iGPUs.
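                To make "non-numerical" concrete: even something as mundane as counting how often a byte value occurs in a buffer maps onto those hardware threads - integer compares and an atomic counter, no floating point. A rough OpenCL sketch of my own (illustrative only, error checking omitted; assumes an OpenCL runtime for the iGPU such as Intel's NEO or Beignet):

```c
/* Rough sketch of non-numerical GPU compute: count occurrences of one byte
 * value in a buffer. Integer compares and an atomic counter only - no
 * floating point. Error checking omitted for brevity.
 * Build e.g.: cc count.c -lOpenCL */
#define CL_TARGET_OPENCL_VERSION 300
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void count_byte(__global const uchar *buf, uint n,\n"
    "                         uchar needle, __global uint *hits)\n"
    "{\n"
    "    size_t i = get_global_id(0);\n"
    "    if (i < n && buf[i] == needle)\n"
    "        atomic_inc(hits);\n"
    "}\n";

int main(void)
{
    enum { N = 1 << 20 };
    unsigned char *data = malloc(N);
    for (int i = 0; i < N; i++)
        data[i] = (unsigned char)(i * 37);   /* arbitrary test pattern */

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "count_byte", NULL);

    cl_uint zero = 0, result = 0, n = N;
    cl_uchar needle = 0x42;
    cl_mem buf  = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 N, data, NULL);
    cl_mem hits = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 sizeof(zero), &zero, NULL);

    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(n), &n);
    clSetKernelArg(k, 2, sizeof(needle), &needle);
    clSetKernelArg(k, 3, sizeof(hits), &hits);

    size_t global = N;                       /* one work-item per byte */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, hits, CL_TRUE, 0, sizeof(result), &result,
                        0, NULL, NULL);

    printf("byte 0x42 occurs %u times\n", result);

    clReleaseMemObject(buf);
    clReleaseMemObject(hits);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    free(data);
    return 0;
}
```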
                Last edited by coder; 20 April 2019, 12:56 PM.

                • #18
                  Originally posted by sandy8925 View Post
                  Obviously, to make any kind of competitive dGPU, you need to add a good amount of dedicated VRAM and a lot more shader units/execution units/cores etc. to match the competition.
                  I think you forgot about heat output. If you put both a powerful GPU and a CPU on the same die, I wouldn't be surprised to see TDPs around 300W (desktop CPUs are ~100W, GPUs are ~200W) in desktop chips. Such a chip would make the i9 look like a joke, and the FX-9590 would look like the coldest thing in the universe. Not to mention, you'd lose a low-power GPU that can be used for basic desktop applications, video decoding (and maybe encoding), as well as running a compositor. There's a reason why integrated GPUs are weak, and having a high-power one separate makes a lot of sense.

                  • #19
                    Guest - True, but AMD's iGPUs have always performed better than Intel's iGPUs, and AFAIK with similar levels of power consumption.

                    • #20
                      Originally posted by coder View Post
                      That's the scale of time and resources you'd need to do a full re-design.
                      Eeeeh, not really. Making something new from scratch isn't that easy if they're targeting performance GPUs for x86; that's barely enough time for embedded graphics, which aren't in the same ballpark.

                      I'd say that it is the time needed to evolve their current iGPU design into something that makes sense as a standalone GPU.

                      They can't just pull the same iGPU modules, copy-paste them a few thousand times and add a PCIe controller (and expect any serious power out of it), but they aren't going to throw out their HD architecture either.
