Intel Confirms Their Discrete GPU Plans For 2020


  • kneekoo
    replied
    Originally posted by devius View Post
    The i740 wasn't as much a failure as people make it out to be. It was actually quite a decent video card for its time with very good image quality compared to some of its competition. It didn't sell all that well, but performance was on par with the Riva 128 and it was cheap. The only problem was that it didn't stand out, so there wasn't any real reason for people to go for one instead of the more popular choices.
    True. I still have an i740, and although I enjoyed its performance, it was nothing outstanding. So I liked the card, and I was especially happy with the price. Also, it was quite something back in those days to have an (almost) all-Intel PC (chipset, CPU, GPU). Only my sound card was an ESS. :P



  • MartinN
    replied
    The way I read this is: "AMD is gonna make us a killer GPU and we'll just OEM the crap out of it." Nvidia's going to suffer.



  • audir8
    replied
    Intel has the knowledge learned from Larrabee, AVX 1/2/512, Movidius, the open-source ispc compiler, and Vulkan/OpenCL support on iGPUs, and is taking the threat from Nvidia very seriously. Will they actually ship something good that's not x86? Hopefully they realize they really need to, and it doesn't become Itanic 2.
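
    As a quick sanity check of that OpenCL-on-iGPU support, here is a minimal sketch that just enumerates OpenCL platforms and devices; it assumes the pyopencl package and an Intel OpenCL runtime (e.g. the NEO driver) are installed, neither of which is mentioned above.

    # Minimal sketch: list OpenCL platforms/devices to see whether the
    # Intel iGPU exposes OpenCL. Assumes pyopencl plus an installed
    # Intel OpenCL runtime (e.g. NEO); purely illustrative.
    import pyopencl as cl

    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            kind = cl.device_type.to_string(dev.type)
            print(f"{platform.name} | {dev.name} ({kind}), "
                  f"{dev.max_compute_units} compute units")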



  • duby229
    replied
    Originally posted by starshipeleven View Post
    Did you even read the man's post? If you knew anything about GPUs you would agree, you clearly don't.

    Main reason Intel GPUs are weak is that they don't have that many "execution units", which is done because they need to leave most of the chip's TDP to the CPU component. Seriously, the iGPU is using less than 10 W now; if they can keep adding them up to 70 or even 160 W, it would change a lot.
    Another reason is that they share system memory, which is kinda meh for a GPU (dedicated GPUs use GDDR for a reason).

    If you place "a huge bunch" of them and add "a few gigs" of GDDR, then yeah, it would "produce something nice". It won't be top-tier, but they can go at least somewhere into the midrange.
    More proof you don't know what the fuck you're talking about. If you look at what those execution units really are and how much of them is fixed-function hardware, adding more of them ain't gonna do much....

    EDIT: I'd bet my last dollar that if they use their existing architecture it's gonna flop in the discrete market. But I highly doubt they can. It's almost certainly gonna have to be a new architecture; their current one is -designed- to feed their fixed-function hardware. It's not even a tenth as programmable as any other current architecture. It came straight out of the early 90s and it's still there....
    Last edited by duby229; 14 June 2018, 09:52 AM.



  • starshipeleven
    replied
    Originally posted by pegasus View Post
    Frankly, what pixel-shuffling silicon needs these days is less power usage, and Intel is great here. For example, they can squeeze a GPU that can drive two 4K displays, plus 4 CPU cores, into a 6 W TDP product. I'm looking forward to the time when these kinds of products will also be able to run some graphics stuff, not just a bunch of terminals.
    The main point here is that actually driving a 2D screen (even with a 3D-accelerated GUI) isn't terribly complex if you don't have mobile-device power ceilings (i.e. if you MUST keep the whole SoC's TDP within 1-2 W, modem included).

    The issue is actually providing the 3D acceleration to render a game or anything else on screens that large.



  • jo-erlend
    replied
    Originally posted by PackRat View Post

    AMD Radeons do have some pro features, like 10-bit color (which would need a monitor that supports it), while GPU pass-through with Nvidia would need a Quadro. Nvidia's evil proprietary driver will block GPU pass-through with a GeForce card.

    I will assume here that Intel will have GPU pass-through that is not blocked like Nvidia's.
    Intel already has much of that in the kernel, although as I understand it, it will only be supported for Core gen 5-7, not gen 8-9, and then support will be added again for gen 10. I don't know why that is, but I speculate it might be because they want to enter with a bang once their dGPU hits the scene.

    It would be _very_ strange if their dGPU didn't support GVT-d, GVT-s and GVT-g.
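
    A minimal sketch of how one might check whether the running kernel actually exposes GVT-g for the iGPU, via the mediated-device (mdev) sysfs interface; the PCI address 0000:00:02.0 is the usual slot for an Intel iGPU but is an assumption here, as is having GVT-g enabled in the kernel configuration.

    # Check for GVT-g mediated-device (vGPU) types under sysfs.
    # The PCI address below is an assumption (typical Intel iGPU slot).
    from pathlib import Path

    IGPU = Path("/sys/bus/pci/devices/0000:00:02.0")

    def gvt_g_types(igpu=IGPU):
        """Return the vGPU types the iGPU advertises, if any."""
        types_dir = igpu / "mdev_supported_types"
        if not types_dir.is_dir():
            return []  # no GVT-g support exposed for this device
        return sorted(p.name for p in types_dir.iterdir())

    print("GVT-g vGPU types:", gvt_g_types() or "none (GVT-g not available)")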



  • PackRat
    replied
    Originally posted by leipero View Post
    PackRat I was referring to "desktop" market-oriented GPUs, not "pro".
    AMD Radeons do have some pro features, like 10-bit color (which would need a monitor that supports it), while GPU pass-through with Nvidia would need a Quadro. Nvidia's evil proprietary driver will block GPU pass-through with a GeForce card.

    I will assume here that Intel will have GPU pass-through that is not blocked like Nvidia's.



  • pegasus
    replied
    Frankly, what pixel-shuffling silicon needs these days is less power usage, and Intel is great here. For example, they can squeeze a GPU that can drive two 4K displays, plus 4 CPU cores, into a 6 W TDP product. I'm looking forward to the time when these kinds of products will also be able to run some graphics stuff, not just a bunch of terminals.



  • starshipeleven
    replied
    Originally posted by duby229 View Post
    Did you even read the man's post? If you knew Intel's architecture you would agree, you clearly don't.
    Did you even read the man's post? If you knew anything about GPUs you would agree, you clearly don't.

    Main reason Intel GPUs are weak is that they don't have that many "execution units", which is done because they need to leave most of the chip's TDP to the CPU component. Seriously, the iGPU is using less than 10 W now; if they can keep adding them up to 70 or even 160 W, it would change a lot.
    Another reason is that they share system memory, which is kinda meh for a GPU (dedicated GPUs use GDDR for a reason).

    If you place "a huge bunch" of them and add "a few gigs" of GDDR, then yeah, it would "produce something nice". It won't be top-tier, but they can go at least somewhere into the midrange.
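
    A rough back-of-envelope sketch of that scaling argument; the EU count, the iGPU power share, and the bandwidth figures are illustrative assumptions (roughly a GT2-class iGPU versus a GDDR5 card), not Intel specifications.

    # Back-of-envelope sketch of the "more EUs + GDDR" argument.
    # All numbers are illustrative assumptions, not measured specs.
    IGPU_EUS = 24          # EUs in a typical GT2-class iGPU
    IGPU_WATTS = 10.0      # rough share of the package TDP the iGPU gets
    DDR4_BW_GBS = 35.0     # dual-channel DDR4, shared with the CPU
    GDDR5_BW_GBS = 250.0   # mid-range dedicated card with GDDR5

    def scaled_eus(power_budget_watts):
        """Naively scale EU count with power (ignores clocks, memory,
        fixed-function blocks and interconnect overhead)."""
        return int(power_budget_watts * IGPU_EUS / IGPU_WATTS)

    for budget in (70, 160):
        print(f"{budget:>3} W budget -> ~{scaled_eus(budget)} EUs "
              f"(vs {IGPU_EUS} today), with ~{GDDR5_BW_GBS / DDR4_BW_GBS:.0f}x "
              f"the memory bandwidth if paired with GDDR")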



  • duby229
    replied
    Originally posted by starshipeleven View Post
    Nah, I could really use some sub-$100 GPUs to drive multiple screens so I could say "fuck it" to NVIDIA's murderously overpriced multihead cards for workstations.
    Did you even read the man's post? If you knew Intel's architecture you would agree, you clearly don't.

