Intel's Linux Graphics Driver Continues With Multi-Tile Preparations


  • Intel's Linux Graphics Driver Continues With Multi-Tile Preparations

    Phoronix: Intel's Linux Graphics Driver Continues With Multi-Tile Preparations

    In addition to Intel's open-source Linux graphics driver developers being quite busy preparing for upcoming Intel Arc "Alchemist" (DG2) graphics cards on the consumer side, they have concurrently been preparing for Xe HP "Ponte Vecchio" hardware too. One of the big undertakings on that side from the driver perspective is bringing up multiple tiles...
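
    Since the article is about exactly this kind of bring-up, here is a minimal, purely hypothetical C sketch of what "multiple tiles" means from a driver's point of view: the device stops being one implicit tile and becomes a small array of them, each with its own slice of local memory and its own engines, initialized in a loop at probe time. None of the struct or function names below are real i915 identifiers; they are invented for illustration.

```c
/*
 * Hypothetical sketch only -- NOT the actual i915 code. It shows the general
 * shape of multi-tile bring-up: each tile owns its own slice of device-local
 * memory and its own engines, and the driver discovers and initializes tiles
 * in a loop instead of assuming exactly one. All names are made up.
 */
#include <inttypes.h>
#include <stdio.h>

#define MAX_TILES 4

struct gpu_tile {
	unsigned int id;       /* tile index within the device      */
	uint64_t lmem_base;    /* start of this tile's local memory */
	uint64_t lmem_size;    /* size of this tile's local memory  */
};

struct gpu_device {
	unsigned int tile_count;
	struct gpu_tile tile[MAX_TILES];
};

/* Bring up every tile the hardware reports; single-tile parts just see one. */
static void gpu_probe_tiles(struct gpu_device *gpu, unsigned int count)
{
	gpu->tile_count = count;
	for (unsigned int i = 0; i < count; i++) {
		struct gpu_tile *t = &gpu->tile[i];

		t->id = i;
		/* Placeholder carve-up: 4 GiB of local memory per tile. */
		t->lmem_base = (uint64_t)i << 32;
		t->lmem_size = (uint64_t)1 << 32;
		/* Per-tile engine and interrupt setup would happen here. */
	}
}

int main(void)
{
	struct gpu_device gpu = { 0 };

	gpu_probe_tiles(&gpu, 2); /* e.g. a two-tile part */
	for (unsigned int i = 0; i < gpu.tile_count; i++)
		printf("tile %u: lmem at 0x%" PRIx64 ", %" PRIu64 " bytes\n",
		       gpu.tile[i].id, gpu.tile[i].lmem_base,
		       gpu.tile[i].lmem_size);
	return 0;
}
```

    The hard part in a real driver is presumably finding every place that quietly assumed a single tile (one interrupt path, one block of local memory, one set of engines) and making it per-tile, which is why the preparation spans so many patches.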

  • #2
    So much glue ;-)

    • #3
      Weird seeing Intel not only come out with a multi-die GPU first (well, maybe), but come out with one at all. Not that I'm complaining; I think such a technology is the only way forward.
      Since Intel is mostly starting from scratch with their GPUs, I imagine it's a lot easier for them to implement such a technology. For AMD and Nvidia, I'm sure it would require a lot of reworking of their drivers.

      Originally posted by uxmkt View Post
      So much glue ;-)
      Indeed.

      • #4
        You shouldn't be surprised considering they've been working on the tech for years.

        • #5
          Originally posted by schmidtbag View Post
          Since Intel is mostly starting from scratch with their GPUs, I imagine it's a lot easier for them to implement such a technology.
          Why do people keep saying this crap? Intel is building on a decade+ of their iGPUs, in terms of drivers, tools, and userspace! The biggest clue is right there in the name: i915 driver!

          This is evolution, not revolution.

          • #6
            Originally posted by coder View Post
            Why do people keep saying this crap? Intel is building on a decade+ of their iGPUs, in terms of drivers, tools, and userspace! The biggest clue is right there in the name: i915 driver!

            This is evolution, not revolution.
            That's like saying "eggplant is an egg that is also a plant - it's right there in the name!"
            It's part of the same driver, but architecturally, it's hardly similar, if at all. So yes, it is pretty much a revolution. Intel is just keeping things simple by cramming everything under the same driver name. Even before Xe/Gen12, Intel had made some significant transformations over the years. Those would be considered evolutionary, but in that regard, that's like comparing an eagle to a velociraptor.

            At the end of the day, they're just an array of transistors configured in a way to use structures and instructions that have been tried and proven, even by competitors. So, if you want to get pedantic about evolution vs revolution, well, then seeing as Raja was involved, this is likely some evolution of GCN too.

            • #7
              Intel has to do this, and Nvidia too, because if the rumours are true, AMD will have multi-die GPUs this year or next. It makes perfect sense.

              • #8
                Originally posted by schmidtbag View Post
                architecturally, it's hardly similar, if at all.
                It's probably not more of a change than AMD did from GCN to RDNA. So, I guess you consider AMD to have "mostly started from scratch with RDNA"?

                Nothing about Intel's GPU effort is remotely starting from scratch. They revised stuff, and that's fine. We'd expect nothing less. But their entire effort sits atop a mountain of stuff they didn't rewrite. Not least, a massive software effort. No doubt > 90% of the codepaths in their driver, userspace, and probably even their toolchain are the same.

                Even at the hardware level, I'm sure they reused more than not. Probably much more. Even if they totally rebuilt the shader cores, a GPU has texture engines, ROPs, media engines, memories, buses, caches, a display controller, microcontrollers, etc. Intel's graphics group has been building GPUs for a long time, and they seem to do a good job of reusing everything they can from one generation to the next.

                Companies are always trying to play up how their latest, greatest product is super new & different. No doubt it has more changes than probably any single one of their GPUs had, previously. That's still a far cry from "mostly starting from scratch".

                Originally posted by schmidtbag View Post
                then seeing as Raja was involved, this is likely some evolution of GCN too.
                At Intel, he is not a hands-on architect and he's still bound by NDAs with AMD. He isn't allowed to use any AMD IP.
                Last edited by coder; 15 January 2022, 06:19 AM.

                • #9
                  Originally posted by coder View Post
                  It's probably not more of a change than AMD did from GCN to RDNA. So, I guess you consider AMD to have "mostly started from scratch with RDNA"?

                  Nothing about Intel's GPU effort is remotely starting from scratch. They revised stuff, and that's fine. We'd expect nothing less. But their entire effort sits atop a mountain of stuff they didn't rewrite. Not least, a massive software effort. No doubt > 90% of the codepaths in their driver, userspace, and probably even their toolchain are the same.
                  I'm talking more about the hardware than the drivers. Sure, most of the codepaths are the same, but you can also say that of everything using Mesa. Why else do you think Nouveau has such good compatibility? But that 10% is what matters most. If that isn't complete or optimized, the GPU isn't worth using. The architectural differences have to be accommodated here, and if you weren't a pedant, you'd understand that this is the part of the driver that takes a hell of a lot of work.
                  Nobody said the entire Mesa driver has to be written from scratch, that'd be stupid.
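
                  To make that shared-versus-specific split concrete, here is a tiny, purely illustrative C sketch. These are not the actual Mesa or Gallium interfaces, and the "xe" backend name is just a stand-in: common code calls into a per-architecture backend through a small table of function pointers, and that backend is the part that has to be written, completed and optimized for each new GPU.

```c
/*
 * Illustrative only -- these are NOT the real Mesa/Gallium interfaces.
 * The point being argued: lots of driver code is shared across GPUs, but
 * the hardware-specific backend (the "10%") is what has to be written and
 * tuned for each new architecture, and that is where most of the work is.
 */
#include <stdio.h>

/* Hardware-specific backend: one of these per GPU architecture. */
struct hw_backend {
	const char *name;
	void (*compile_shader)(const char *src); /* per-ISA shader compiler  */
	void (*emit_draw)(int vertex_count);     /* per-GPU command building */
};

/* Shared "frontend" path: identical for every GPU the driver supports. */
static void draw_triangle(const struct hw_backend *hw)
{
	hw->compile_shader("trivial vertex + fragment shader");
	hw->emit_draw(3);
}

/* A made-up backend standing in for a new architecture. */
static void xe_compile(const char *src) { printf("[xe] compile: %s\n", src); }
static void xe_emit(int n)              { printf("[xe] draw %d vertices\n", n); }

int main(void)
{
	const struct hw_backend xe = { "xe", xe_compile, xe_emit };

	draw_triangle(&xe); /* same shared path, new backend underneath */
	return 0;
}
```
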
                  Originally posted by coder View Post
                  Even at the hardware level, I'm sure they reused more than not. Probably much more. Even if they totally rebuilt the shader cores, a GPU has texture engines, ROPs, media engines, memories, buses, caches, a display controller, microcontrollers, etc. Intel's graphics group has been building GPUs for a long time, and they seem to do a good job of reusing everything they can from one generation to the next.

                  Companies are always trying to play up how their latest, greatest product is super new & different. No doubt it has more changes than probably any single one of their GPUs had, previously. That's still a far cry from "mostly starting from scratch".
                  Yes, hence my last point: all those structures you mentioned are found in non-Intel hardware too, and many of them likely work pretty much the same as the competitors' do. They're probably not identical, but many of them are also unlikely to be unchanged from Gen11 graphics. When a big car manufacturer creates a new platform from the ground up, are they starting from scratch? Are they literally reinventing the wheel? Are they not tweaking some of the designs that were tried and true? They would call it a "ground-up redesign" (which means pretty much the same thing), but is it literally that? Probably not. There's only so much you can change before it simply doesn't work. Same goes for processors.
                  When AMD released Bulldozer, it was considered "new from the ground up" but by your semantics, that isn't true either.

                  So yeah, I'll accept that saying "mostly starting from scratch" is technically inaccurate. But your little tirade just shows you being insufferably literal. You yourself asked "why do people keep saying this", but that's because most people know "starting from scratch" is hyperbole, or in this case, a figure of speech. It's really not something to get so worked up over. The fact of the matter is, it's not some incremental upgrade over Gen11. That's what matters in this context.

                  • #10
                    Originally posted by schmidtbag View Post
                    When a big car manufacturer creates a new platform from the ground up, are they starting from scratch?
                    Not the best analogy. They have suppliers & supply chains to manage, and need to distribute repair parts as well. Not to mention the tooling at the factories, retraining assembly line workers, retraining repair mechanics, etc. It's not only the engineering time & resources they're optimizing for, when they carry over certain parts and sub-assemblies from prior generations or other models.

                    Originally posted by schmidtbag View Post
                    most people know "starting from scratch" is hyperbole,
                    I've seen enough cases where people seem to somehow forget about the iGPUs that I don't assume that.

                    Originally posted by schmidtbag View Post
                    It's really not something to get so worked up over.
                    LOL.

                    Originally posted by schmidtbag View Post
                    The fact of the matter is, it's not some incremental upgrade over Gen11. That's what matters in this context.
                    It's more than the usual increment. In CPU parlance, it's more of a Golden Cove than a Willow Cove. More of a Sandy Bridge than a Nehalem. In GPU terms, I think RDNA probably is a good analogy.
