Intel's New Iris Driver Gets Speed Boost From Changing The OpenGL Vendor String
-
-
Originally posted by coder:
"Is this based on anything more than pure speculation and assumptions? AFAIK, Intel has said absolutely nothing about the architecture of their Xe GPUs."
iGPUs use system RAM and don't have dedicated VRAM. Intel's Iris Pro CPUs do contain a small amount of dedicated graphics memory (eDRAM), which has shown huge performance improvements over the average iGPU.
iGPUs also usually have a smaller number of shader units/execution units/cores (whatever terminology you want to use), since they're not meant to run high-end games. They just need to render the desktop and typical desktop applications at 60+ FPS for the next 6-7 years, which is a pretty low bar, and they need to keep power consumption down, which is super important in laptops and tablets, and in some cases desktops as well. They're intentionally low performance.
Obviously, to make any kind of competitive dGPU, you need to add a good amount of dedicated VRAM and a lot more shader units/execution units/cores to match the competition. That's the minimum requirement for a dGPU. So there's nothing to be proved, and no speculation or assumptions; they've just stated the plain and obvious truth.
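A rough back-of-the-envelope comparison makes the VRAM point concrete. The configurations below (dual-channel DDR4-2400 for the iGPU case, 256-bit GDDR5 at 8 GT/s for the dGPU case) are illustrative examples of my choosing, not numbers from this thread:

```python
# Rough, illustrative memory-bandwidth comparison: why an iGPU sharing
# system RAM is starved next to a dGPU with dedicated VRAM.

def bandwidth_gbps(bus_width_bits: int, transfer_rate_mtps: float) -> float:
    """Peak bandwidth in GB/s = bus width in bytes * transfers per second."""
    return bus_width_bits / 8 * transfer_rate_mtps / 1000

# Dual-channel DDR4-2400: two 64-bit channels at 2400 MT/s,
# shared with the CPU.
ddr4 = bandwidth_gbps(128, 2400)    # 38.4 GB/s

# Mid-range dGPU: 256-bit GDDR5 at 8000 MT/s, all for the GPU.
gddr5 = bandwidth_gbps(256, 8000)   # 256.0 GB/s

print(f"DDR4 dual-channel: {ddr4:.1f} GB/s")
print(f"GDDR5 256-bit:     {gddr5:.1f} GB/s (~{gddr5 / ddr4:.0f}x)")
```

Even before accounting for the CPU contending for the same memory bus, the dedicated-VRAM configuration has roughly 7x the peak bandwidth.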
-
Originally posted by starshipeleven:
"It's highly unlikely that they make a completely new GPU architecture, their iGPU designs don't suck"
As for your and dos1's speculation that it'll simply be more of the same: the scale of their Xe effort is certainly big enough for a break with their HD Graphics architecture. They announced products in 2017 that won't ship until 2020, and it's pretty clear the effort was well underway before then. They even created an entirely new division of the company to work on graphics products and related accelerators. That's the scale of time and resources you'd need to do a full redesign.
https://www.anandtech.com/show/12017...hief-architect
Just to be clear (which seems to be an issue with you), I'm not saying they will redesign; I'm saying we simply don't know. It's certainly possible they're doing a root-and-branch redesign. I believe their current architecture at least needs significant reworking in order to scale up efficiently.
-
Originally posted by sandy8925:
"Intel's Iris Pro CPUs do contain a small amount of dedicated VRAM, which have shown huge performance improvements over the average iGPUs.

iGPUs also usually have a smaller amount of shader units/execution units/cores etc. (whatever terminology you want to use), since they're not meant to run high end games, they just need to render the desktop and typical desktop applications at 60+ FPS for the next 6-7 years, which is a pretty low bar (and also for reduced power consumption which is super important in laptops, tablets and in some cases for desktops as well). They're intentionally low performance.

Obviously, to make any kind of competitive dGPU, you need to add a good amount of dedicated VRAM and a lot more shader units/execution units/cores etc. to match the competition. This is the minimum requirement to make a dGPU. So there's nothing to be proved, no speculation or assumptions. They've just stated the plain and obvious truth."
I've read their whitepapers on Gen8, Gen9, and Gen11, and probably know a good deal more about their GPUs than you do. I've similarly read all of AMD's whitepapers and deep dives on many of Nvidia's recent GPUs. I've possibly even written more OpenGL and OpenCL code than you have. What I need is news: sources. That's what I asked for; that's all I asked for.
-
Actually, since you all seem to be speculating, I can also play that game.
I think that if all Intel were doing is scaling up their HD Graphics, adding some external GDDR (or HBM2) memory, and slapping it on a PCIe card, it shouldn't take them 3+ years. That's why I'm skeptical of the idea that they're not doing at least a somewhat more fundamental reworking.
Furthermore, as I've said, I think their current architecture doesn't actually scale up very well. Gen11 makes some needed changes, so it does take them in the right direction. Still, those EUs are far too narrow. It's probably going to burn a fair bit more power per FLOPS than a comparable AMD iGPU.
Edit: lest you accuse me of being some kind of Intel iGPU hater, I'll confess I actually like how they went with such a narrow design. Having 24 EUs (168 threads) in a desktop iGPU makes it more amenable to highly-parallel tasks that aren't strictly numerical. I wish this were more commonly exploited, but most people seem to use GPU compute for numerically-intensive (esp. floating point) applications. I'd love to see something like LLVM ported to run on Intel's current-gen iGPUs.

Last edited by coder; 20 April 2019, 12:56 PM.
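For reference, the 24 EU / 168 thread figure works out from Intel's published Gen9 numbers (7 hardware threads per EU; two SIMD-4, FMA-capable FPUs per EU). The clock speed below is an example value I've picked for illustration; it varies by SKU:

```python
# Back-of-the-envelope math for an Intel Gen9 GT2 iGPU (e.g. HD Graphics 530),
# using figures from Intel's Gen9 compute-architecture whitepaper.

EUS = 24                   # execution units in a GT2 desktop iGPU
THREADS_PER_EU = 7         # hardware threads per EU
SIMD_LANES_PER_EU = 2 * 4  # two SIMD-4 FPUs per EU
FLOPS_PER_FMA = 2          # a fused multiply-add counts as 2 FLOPs
CLOCK_GHZ = 1.15           # example max clock; varies by SKU

hw_threads = EUS * THREADS_PER_EU
peak_gflops = EUS * SIMD_LANES_PER_EU * FLOPS_PER_FMA * CLOCK_GHZ

print(f"{hw_threads} hardware threads")        # 168, as cited above
print(f"~{peak_gflops:.1f} peak fp32 GFLOPS")  # ~441.6 at 1.15 GHz
```

The 168 independent hardware threads, rather than the raw FLOPS, are what make such a part interesting for parallel workloads that aren't strictly numerical.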
-
Originally posted by sandy8925:
"Obviously, to make any kind of competitive dGPU, you need to add a good amount of dedicated VRAM and a lot more shader units/execution units/cores etc. to match the competition."
-
Originally posted by coder:
"That's the scale of time and resources you'd need to do a full re-design."
I'd say that it is the time needed to evolve their current iGPU design into something that makes sense as a standalone GPU.
They can't just pull the same iGPU modules, copy-paste them a few thousand times and add a PCIe controller (and expect any serious power out of it), but they aren't going to throw out their HD architecture either.