
Intel To Split Off Their Old Haswell/Broadwell Vulkan Code Into Separate Driver


  • #21
    My biggest issue with these generational messages/classifications is that I don't know what they mean for Atoms (E-cores in today's parlance).

    Everything from Sandy Bridge to Broadwell (P-cores only, for newcomers) that I still operate has always worked with discrete GPUs, either because they didn't have an iGPU (Xeon E5/Xeon-D) or because it wasn't attractive enough (Xeon E3).

    With Skylake I still use Iris Plus based notebooks, and while I don't play games with them, I want my Google Maps and all those eye-candy compositors to work just fine at my native 4K.

    To my understanding, everything Sandy/Ivy/Haswell might have been beta in terms of 3D, and there might be issues with features that are missing or defective in functional scope.

    Everything from Broadwell to Comet Lake is pretty much feature-complete and only differs in terms of scale and EU count, from the "please don't think 3D" 4 EUs on J1900 Atoms to the Iris Pro Graphics 580 on an i7-6785R with 72 EUs and 128 MB of eDRAM to make it fly.

    That's why a cut just below Xe and Tiger Lake seems to make more sense for the Core CPUs, or perhaps below Haswell, because Sandy/Ivy were still too broken.

    Surviving machines with these bigger cores are mostly either workstations or servers, both of which won't use the iGPUs. For them, supporting the latest games and APIs isn't as critical as a desktop that works.

    The same holds true for the Atoms, but the question is: what does a functional cut at Haswell (2013) translate to on the Atom side, which may be embedded and totally depends on iGPU support?

    When Jasper Lake Atoms, which have just become available this year, are subsumed as Haswell-generation iGPUs, that is a problem. Perhaps anything below Silvermont may be considered pathetic and truly outdated. But Silvermont hardware reached a performance threshold that still has it driving quite a bit of critical infrastructure. Goldmonts may be less than Haswell in terms of CPU instruction set support or iGPU scope, but they are still being sold and installed today.



    • #22
      Originally posted by abufrejoval
      My biggest issue with these generational messages/classifications is that I don't know what they mean for Atoms (E-cores in today's parlance).
      According to Wikipedia, Silvermont has Gen 7 graphics, which corresponds to Ivy Bridge and would be the earliest Intel architecture to have a Vulkan driver, although that support is only partially complete. It would be part of this split.

      Goldmont has Gen 9 graphics (similar to Skylake), which would remain in the current driver.

      With Skylake I still use Iris Plus based notebooks, and while I don't play games with them, I want my Google Maps and all those eye-candy compositors to work just fine at my native 4K.
      To be clear, this change is only about the Vulkan drivers. Desktop compositors and browsers just use OpenGL, which isn't going to be affected at all, and neither will video acceleration.
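
      If anyone wants to double-check which Vulkan driver actually services their iGPU, a minimal sketch like the one below should do it. It is plain Vulkan 1.2 device enumeration, nothing Mesa-specific is assumed, and the driver names it prints are simply whatever your installed build reports (vulkaninfo shows the same information):

          /* Minimal sketch: list Vulkan physical devices and the driver serving each.
           * Assumes a Vulkan 1.2 capable loader and driver (older stacks may leave the
           * driver-properties struct untouched). Build with: cc vkdrv.c -lvulkan
           * (vkdrv.c is just an illustrative file name). */
          #include <stdio.h>
          #include <vulkan/vulkan.h>

          int main(void) {
              VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                                        .apiVersion = VK_API_VERSION_1_2 };
              VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                           .pApplicationInfo = &app };
              VkInstance inst;
              if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) {
                  fprintf(stderr, "No Vulkan instance could be created\n");
                  return 1;
              }

              /* Standard two-call enumeration: count first, then fetch handles. */
              uint32_t count = 0;
              vkEnumeratePhysicalDevices(inst, &count, NULL);
              VkPhysicalDevice devs[16];
              if (count > 16) count = 16;
              vkEnumeratePhysicalDevices(inst, &count, devs);

              for (uint32_t i = 0; i < count; i++) {
                  VkPhysicalDeviceDriverProperties drv = {
                      .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DRIVER_PROPERTIES };
                  VkPhysicalDeviceProperties2 props = {
                      .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2,
                      .pNext = &drv };
                  vkGetPhysicalDeviceProperties2(devs[i], &props);
                  printf("%s: driver %s (%s)\n", props.properties.deviceName,
                         drv.driverName, drv.driverInfo);
              }

              vkDestroyInstance(inst, NULL);
              return 0;
          }

      If the split lands, the old iGPUs would presumably just report a different driver name here than the newer ones.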
      Last edited by smitty3268; 26 August 2022, 04:05 AM.



      • #23
        So until someone decides to rewrite Wayland in Vulkan, Atoms will continue to be safe to use. Thanks for pointing that out; it's far too easy to overlook with all this EOL-ing of perfectly adequate hardware going on.

        That confines the risk to the CPU side, where e.g. KVM treats Goldmont Atoms purchased two years ago as 13-year-old Westmeres, and RHV 4.4 no longer runs on them because that's obviously too old...



        • #24
          I was quite shocked to see my 9.5th Gen Coffee Lake iGPU getting legacy'd in the Windows driver already. I bought a brand new laptop with an 8300H CPU just barely two years ago. On the other hand, Nvidia, who everyone shits on all the time, is still supporting 10+ year old GPUs in their main driver.



          • #25
            I feel for you! But then "legacy" doesn't quite mean end-of-life.

            That they are no longer receiving optimizations for the newest Windows games may be of little consequence. It's security and basic "2D mostly" screen support that counts. Unfortunately, 2D these days can rely on 3D for a lot of its nice looks and performance. And it could have critical vulnerabilities... Let's see where that takes us!

            tl;dr

            In my case it was the NUC8 with the double-sized 48 EU Iris 655 plus 128 MB of eDRAM on an i7-8569U that caught my fancy, at practically zero price uplift vs. the ordinary 24 EU HD iGPUs. I then found out that this really complex "Apple custom" silicon only delivered a 50% uplift vs. the 24 EU iGPUs, at probably quite a bit of extra production cost that wasn't passed on to Apple or NUC customers. Anybody else either wasn't offered the chip or got a prohibitive quote; in any case, there weren't any design wins beyond Apple and Intel's own NUCs.

            I then switched another order for this NUC8 to a NUC10 based on the i7-10700U hexacore with the ordinary 24 EU iGPU, because they were mostly going to be µ-servers anyway.

            And a couple of months later I got lucky and spotted a Tiger Lake based i7-1165G7 NUC11 with the 96 EU Xe iGPU, which doesn't use eDRAM, for the markup of roughly an extra Atom.

            The three normally operate an oVirt/RHV cluster, so GPU performance does not matter. But obviously I had to bench them a bit with both Windows and Linux, because a) I'm curious and b) you never know how they'll be recycled (I am currently running them with 64 GB of DDR4 RAM, 1 TB of NVMe SSD and a 10GBase-T Thunderbolt Ethernet NIC in said oVirt/RHV cluster using Gluster storage).

            And it turns out that the 96 EU Xe iGPU scales linearly with EU count, delivering 4x the i7-10700U iGPU performance even without the help of eDRAM, while the 48 EU iGPU with 128 MB of eDRAM only gets 50% extra performance out of all that extra effort. DDR4 bandwidth changes very little between the three (~40 GB/s on all of them, even though the timings go DDR4-2400/2666/2933), so there must be some real [cache?] magic in Xe for it to scale without a DRAM bandwidth advantage.
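
            Just to make the per-EU arithmetic explicit, here's a tiny sketch using only the rough ratios quoted above, with the NUC10 as the 1.0x baseline (these are my approximations, not measurements from anywhere else):

                /* Per-EU scaling arithmetic for the three iGPUs above. The performance
                 * figures are approximate benchmark ratios, normalized to the 24 EU NUC10. */
                #include <stdio.h>

                int main(void) {
                    struct { const char *igpu; int eus; double perf; } nuc[] = {
                        { "NUC10, 24 EU UHD",              24, 1.0 },  /* baseline    */
                        { "NUC8,  48 EU Iris 655 + eDRAM", 48, 1.5 },  /* ~50% uplift */
                        { "NUC11, 96 EU Xe",               96, 4.0 },  /* ~4x uplift  */
                    };

                    for (int i = 0; i < 3; i++) {
                        double eu_ratio = nuc[i].eus / 24.0;       /* EUs vs. baseline   */
                        double per_eu   = nuc[i].perf / eu_ratio;  /* scaling efficiency */
                        printf("%-32s %2d EUs  perf %.1fx  per-EU efficiency %3.0f%%\n",
                               nuc[i].igpu, nuc[i].eus, nuc[i].perf, per_eu * 100.0);
                    }
                    return 0;
                }

            Which is just another way of saying that Xe holds roughly 100% per-EU efficiency without eDRAM, while the 48 EU Gen9.5 part drops to about 75% even with it.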

            Looking at these three basically lets me follow the battle for iGPU power that was going on between Apple and Intel for iMacs: never an issue of mine (I considered the IBM PC the only true Apple ][ heir).

            But to cut a long story slightly shorter: none of them were fit for gaming! And even Atoms these days no longer suck with 2D@4k, at least with Jasper Lake (60Hz!).

            On the CPU side, 4 cores of Tiger Lake deliver pretty much exactly the same compute power as 6 cores of Comet Lake, while scalar IPC went up by about that same core-count ratio, too, at identical TDPs.
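
            Spelled out, using nothing but the core-count ratio from the sentence above:

                /* 4 Tiger Lake cores matching 6 Comet Lake cores implies this per-core ratio. */
                #include <stdio.h>

                int main(void) {
                    const double tgl_cores = 4.0, cml_cores = 6.0;
                    double per_core = cml_cores / tgl_cores;  /* 6/4 = 1.5x per-core throughput */
                    printf("Per-core throughput: %.2fx, i.e. roughly %.0f%% higher scalar IPC at similar clocks\n",
                           per_core, (per_core - 1.0) * 100.0);
                    return 0;
                }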

            I don't recommend running NUCs at the default >60 Watt TDP, because that turns those tiny fans into noisy turbines.

