OpenGL 3.1 Not Likely In Mesa Until 2013


  • Kivada
    replied
    Originally posted by olbi View Post
    I can't understand it. Mesa has such great support from Intel, Red Hat, and other big giants of IT, and they couldn't implement a 3-year-old spec, where nVidia and AMD could do it in such a short time. What is the main reason? Not enough people or money for the work?
    A lot of the problem is the fact that the hardware itself lagged years behind the spec. Take a look, Intel GPUs didn't even support OpenGL 3 till the Sandy Bridge era: https://en.wikipedia.org/wiki/Compar...ocessing_units

    The new HD Graphics 2500 and HD Graphics 4000 GPUs can now handle OpenGL 4, but there is no reason to implement it on them, as they are far too slow to handle what most games are doing these days.

    Intel's team seems to be one of the major driving forces behind Mesa's OpenGL code, as AMD's team is too small to devote the manpower to adding to the spec, seeing as they have such a large back catalog of GPU hardware to get working on the existing Mesa code before they can even think about adding advanced features to the OpenGL stack. You've gotta get the hardware running in the first place before you can make it stable, fast and add features, else it becomes a half-assed monstrosity.



  • glisse
    replied
    Originally posted by smorovic View Post
    I believe there is a lot of optimized third-party algorithms/code licensed for closed-source graphics stacks, and this is why those perform better. This is probably also a reason why all those mobile drivers are closed - it's simply not possible to release that code without violating the NDA, and maybe they don't even get the source itself, just a precompiled library or obfuscated IR that is compiled for the target platform. The probable exceptions are Nvidia and ATI, who developed their stacks earlier than the competition, but then they have a lot to hide. Protecting their driver code means protecting their most valuable IP (meaning the easiest to copy), and probably saves them from the occasional patent lawsuit.

    Mesa, OTOH, also suffers from a lack of manpower. It doesn't help that a lot of effort is split between the classic Intel driver and Gallium for AMD/Nouveau. I guess Intel doesn't want to spend time rewriting and re-optimizing all their code (if they don't foresee immediate gain), and they'd also be doing free work for the competition, since Gallium tries to share code between the drivers. Third, it might not even be the optimal framework for their driver, whereas Intel devs have free hands with the classic driver to tune it to their needs and hardware.
    I don't get why people think there is some secret sauce in the proprietary code; there isn't. It's just optimization over and over and over. It's a matter of numbers: a couple dozen people in open source for all GPUs (AMD, Intel, NVidia), while for the closed drivers it's several hundred for each GPU. No big secret, just a lot of man-time spent optimizing each possible use case and sometimes adding a specific codepath for a specific application.



  • smorovic
    replied
    My theory

    I believe there is a lot of optimized third-party algorithms/code licensed for closed-source graphics stacks, and this is why those perform better. This is probably also a reason why all those mobile drivers are closed - it's simply not possible to release that code without violating the NDA, and maybe they don't even get the source itself, just a precompiled library or obfuscated IR that is compiled for the target platform. The probable exceptions are Nvidia and ATI, who developed their stacks earlier than the competition, but then they have a lot to hide. Protecting their driver code means protecting their most valuable IP (meaning the easiest to copy), and probably saves them from the occasional patent lawsuit.

    Mesa, OTOH, also suffers from a lack of manpower. It doesn't help that a lot of effort is split between the classic Intel driver and Gallium for AMD/Nouveau. I guess Intel doesn't want to spend time rewriting and re-optimizing all their code (if they don't foresee immediate gain), and they'd also be doing free work for the competition, since Gallium tries to share code between the drivers. Third, it might not even be the optimal framework for their driver, whereas Intel devs have free hands with the classic driver to tune it to their needs and hardware.



  • smitty3268
    replied
    Originally posted by Nedanfor View Post
    AFAIK Intel has no plans to make high-end GPUs, so even if Haswell supported OGL 4.x it wouldn't be so useful.
    Ivy Bridge can already support the full GL 4 API; the fact that it doesn't is 100% down to the driver teams not having finished the software to do so. The hardware is already there.

    Whether the hardware is fast enough to really support applications that would try to use that support is a different matter - perhaps Haswell will be there.



  • Nedanfor
    replied
    Originally posted by darkbasic View Post
    Support in mesa means drivers will support it in ~6 months.
    And that means this would take time that could be spent on other things.

    Originally posted by darkbasic View Post
    Done (Intel)
    Yes but r600g/nouveau need improvements in power management.

    Originally posted by darkbasic View Post
    WIP (Intel)
    Well, Sandy/Ivy Bridge performance on Linux is about 30-50% of SB/IB on Windows... And those GPUs aren't good for gaming on Windows.

    Originally posted by darkbasic View Post
    WIP
    Yes and I think OCL is more interesting than OGL 3.2/4.x.

    Originally posted by darkbasic View Post
    Done (Intel)
    As far as I know only with dedicated hardware. Gallium3D offers extensible (but less efficient) hw decoding.

    Originally posted by darkbasic View Post
    But we need 3.2
    I think we don't need it right now.

    Originally posted by darkbasic View Post
    They are already working on Haswell...
    AFAIK Intel has no plans to make high-end GPUs, so even if Haswell supported OGL 4.x it wouldn't be so useful.



  • smitty3268
    replied
    Originally posted by Qaridarium
    the catalyst run does have more tests?
    Piglit automatically skips tests for features that drivers report they don't support - so having GL 4.2 support and more extensions than the OSS drivers means it can run more tests.
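    In other words, the runner compares each test's requirements against what the driver advertises. A minimal sketch of that skip logic (the function and data here are hypothetical, not Piglit's actual API):

    ```python
    # Hypothetical sketch of a Piglit-style capability check: each test
    # declares the GL version and extensions it needs, and the runner
    # only executes it if the driver reports all of them.

    def should_run(test_requirements, driver_caps):
        """Return True if the driver advertises everything the test needs."""
        if test_requirements["gl_version"] > driver_caps["gl_version"]:
            return False  # e.g. a GL 4.x test on a GL 3.0 driver is skipped
        return all(ext in driver_caps["extensions"]
                   for ext in test_requirements["extensions"])

    # A driver exposing more features therefore runs a larger test set.
    catalyst = {"gl_version": 4.2,
                "extensions": {"GL_ARB_tessellation_shader",
                               "GL_ARB_draw_indirect"}}
    r600g = {"gl_version": 3.0, "extensions": set()}

    tess_test = {"gl_version": 4.0,
                 "extensions": {"GL_ARB_tessellation_shader"}}

    print(should_run(tess_test, catalyst))  # True: runs on Catalyst
    print(should_run(tess_test, r600g))     # False: skipped on r600g
    ```

    So a higher test count for Catalyst reflects its larger advertised feature set, not a different test suite.
    
    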



  • airlied
    replied
    Originally posted by Qaridarium
    (edit) oops... where is the comparison to the r600 driver?
    Oh, I was lazy and left it as an exercise for the reader, though memory seems to point at around 7100/7600 or something like that.



  • kraftman
    replied
    Originally posted by elanthis View Post
    FOSS projects do not have a lot of developers. The idea that there are millions of people willing to contribute to FOSS may be true in the broadest sense, but it's certainly not true when you narrow things down to the actually important, useful, and/or relevant FOSS projects.

    If you add up all of the people actively working on Mesa, GNOME, the desktop-related bits of the kernel, X11/Wayland, glibc and the rest of the core GNU system, and so on, you will still not have as many people as Microsoft has test engineers. The last goofy little 6-month 2D game project I worked on had a larger team than all of Mesa has.
    When you add the Linux kernel to that list the situation is much different. Linux's growth rate exceeded Windows NT's in 1999, and Linux had many features before Windows did. It seems Windows can only win in things like Mesa, which aren't so important because we have proprietary graphics drivers.



  • 89c51
    replied
    Originally posted by darkbasic View Post
    They are already working on Haswell...
    Our only hope for OpenGL 4+ is Intel developing their own high-end GPUs (comparable to nVidia's and AMD's).

    And then again, they don't use G3D.



  • Ansla
    replied
    Originally posted by airlied View Post
    It's not as if Red Hat can ship GL 3.0 drivers anyway, so working on them isn't a great use of our time, and working on GL doesn't get you CL or video decode or anything. Also, it's not as if Red Hat can ship video decoders, so again no reason for us to invest heavily in them.
    True. What I expect from Red Hat, or any other commercial distribution that is mostly popular on servers, is to only care about OpenGL up to the level needed to run a composited desktop with decent performance. What I would expect, though, is care about power management - but probably the customers you have with AMD graphics cards have embedded graphics that don't use enough power to matter.

    Originally posted by airlied View Post
    Perhaps some of the distros that do ignore patents could invest more in these.
    If a distribution makes money out of selling its product, it will care about patents. If it doesn't make money, it has nothing to invest, so if a Mesa developer is also a Gentoo or Arch developer, that would be pure coincidence - and I don't know of other distros enabling patented code by default.

