Intel Hits "Almost There" GL 3.0 Support In Mesa

  • Intel Hits "Almost There" GL 3.0 Support In Mesa

    Phoronix: Intel Hits "Almost There" GL 3.0 Support In Mesa

    There's just a week and a half left in the year, but will Intel be successful in their goal of open-source OpenGL 3.0 support in Mesa for their Sandy/Ivy Bridge hardware in 2011? It looks like they will fall just short...


  • #2
    As I understand it, all the real games programmers want OpenGL 4.x. So we still have to wait a while longer, but it is really nice to see this progress.



    • #3
      Big congrats and thanks to everyone involved!
      (Although I'll repeat myself after the commit that actually enables it...)

      The really good part seems to be that we are not just close to GL 3.0 but also to 3.3! So I hope the version number will keep being bumped in subsequent releases. However, I have to admit that I have no idea whatsoever how difficult the newer GLSL versions will be. 1.30 seemed like a huge pain...



      • #4
        Originally posted by Wilfred View Post
        As I understand it, all the real games programmers want OpenGL 4.x.
        Thanks - this is actually a very interesting question. I am unaware of any applications or games on Linux that require even GL 3.0, let alone GL 4.0. So it is a bit of a chicken-and-egg problem - I don't know whether such applications do not exist because nobody writing games for Linux needs GL 3.0+ extensions, or whether developers are not writing games for Linux because the common drivers lack those extensions.

        Or maybe GL 2.0+ is just enough for pretty much everyone these days.

        I personally think that GL 3.0/4.0-focused games and applications will start to appear on Linux as soon as Mesa itself supports GL 3.0. That will be a really interesting time.



        • #5
          Originally posted by phoronix View Post
          Phoronix: Intel Hits "Almost There" GL 3.0 Support In Mesa...
          ...at 1 fps. Seriously though, IMHO Mesa's biggest problem isn't compliance with some GL standard, but its lack of performance.



          • #6
            Originally posted by AnonymousCoward View Post
            ...at 1 fps
            Actually, I'd say that it runs about two orders of magnitude faster than that, on some hardware which is going to be released in a couple of months, but I won't give away any additional details.



            • #7
              Originally posted by AnonymousCoward View Post
              ...at 1 fps. Seriously though, IMHO Mesa's biggest problem isn't compliance with some GL standard, but its lack of performance.
              Well, I'm of the opinion that computer software design should be as follows: Make it right, then make it fast.

              As long as we have the functional plumbing to get GL 3.0+ to render correctly, we can spend time optimizing it to be faster. What point is there to making a broken implementation run faster if all you're doing is displaying garbage output?



              • #8
                Originally posted by Wilfred View Post
                As I understand it, all the real games programmers want OpenGL 4.x. So we still have to wait a while longer, but it is really nice to see this progress.
                Not really. We want OpenGL 3.3. 4.x gives us a small handful of features at the expense of supporting only a much smaller subset of the existing consumer GPUs in use. The two features that anyone really cares about have been commonly available as 2.1-compatible extensions in the proprietary GL drivers for a couple of years now.

                Direct3D 11 is seeing much more uptake and interest because it -- unlike anything OpenGL ever has done or likely ever will -- offers very real API improvements that make a huge difference. One of those features is compute shaders, which in Khronos land are a separate API (OpenCL). The other is kick-ass threading support, which is impossible to offer in an equivalent way in OpenGL without completely rewriting the API.

                The threading is the big issue with OpenGL, aside from it just being a generally horrible API. OpenGL requires and depends on magic global variables for state, while D3D uses explicit object handles. With OpenGL, the only way to handle threading at all is to put all of your graphics code into critical sections, use some crazy code to switch the active context objects in certain cases, and then pray the driver handles it correctly -- which it isn't required to do. It's literally impossible to do efficient threaded GPU resource creation or drawing-command-queue generation in GL.

                D3D11, on the other hand, allows you to create multiple context objects attached to the same device object, and those context objects are directly part of the API calls for resource management rather than being implicit global state across the entire process like in GL, so each thread can manage independent resources with no application-side locking overhead or complexity. It also allows for the creation of complete command queues in those threads (more efficient and flexible than what the deprecated display-list API in GL does), which can then be efficiently submitted by the main thread for actual rendering.

                Without that, threaded "rendering" is basically limited to occlusion/frustum culling of objects and nothing else. With it, all the complex state management and rendering setup for drawn objects can also be distributed to the threads and handled in the same pass as the culling, in addition to the generation or modification of any streaming buffer state those objects need (like the uniform buffer objects you'd want to fill up with bone transformation information for animated characters, or particle state updates for physics-using particle engines that can't be done entirely on the GPU).
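
The contrast between implicit global state and explicit context handles can be sketched in plain C++. This is only an illustrative model under stated assumptions -- none of the names below are real GL or D3D11 calls -- but it shows why the first style forces a process-wide lock while the second lets each thread record work independently:

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// A toy model of the two state-management styles discussed above.
// None of these names are real GL or D3D11 calls; this only sketches the
// "implicit global state" vs. "explicit context handle" patterns.

// Style 1: classic-GL-like implicit global state. Every call acts on a
// hidden "current" binding, so threads must serialize around one lock.
namespace gl_style {
    int g_bound = -1;       // the process-wide magic global
    std::mutex g_lock;      // the critical section the application must add

    void bind(int buffer) { g_bound = buffer; }
    void upload(std::vector<int>& log) { log.push_back(g_bound); }

    void worker(int buffer, std::vector<int>& log) {
        std::lock_guard<std::mutex> guard(g_lock);  // serializes all threads
        bind(buffer);
        upload(log);
    }
}

// Style 2: D3D11-like explicit contexts. Each worker thread records into
// its own deferred context; the main thread later submits the lists.
namespace d3d_style {
    struct Context {
        std::vector<int> commands;          // the recorded command list
        void upload(int buffer) { commands.push_back(buffer); }
    };

    void worker(Context& ctx, int buffer) {
        ctx.upload(buffer);  // no lock needed: ctx belongs to this thread
    }
}
```

In the first style the lock makes the "threaded" path effectively serial; in the second, only the final submission of the recorded command lists is single-threaded, which is essentially the deferred-context model described above.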

                GL has long just been a crappy API, but it's gotten to the point now where that API is actually making it impossible to maintain feature parity with D3D. It needs to be rewritten. GL5 really just needs to break back-compat and be a whole new object-based API. That unfortunately is not likely to happen, and we are unlikely to see a non-Microsoft graphics API that can truly compete with D3D until Apple, Google, and/or some other companies with a stake in graphics APIs get fed up with Khronos' uselessness and push a competing open API standard. Or the FOSS folks could push something, which Gallium makes very easy to do: there's not much stopping someone from writing a new API as a state tracker for Gallium other than a lack of will. Making such an API work and showing that it's efficient and easy to use would be a good way to get the graphics hardware vendors and "alternative" OS vendors to pay attention and get on board, too, I would think.



                • #9
                  Thanks for this very interesting post.

                  Originally posted by elanthis View Post
                  GL has long just been a crappy API, but it's gotten to the point now where that API is actually making it impossible to maintain feature parity with D3D. It needs to be rewritten. GL5 really just needs to break back-compat and be a whole new object-based API. That unfortunately is not likely to happen, and we are unlikely to see a non-Microsoft graphics API that can truly compete with D3D until Apple, Google, and/or some other companies with a stake in graphics APIs get fed up with Khronos' uselessness and push a competing open API standard. Or the FOSS folks could push something, which Gallium makes very easy to do: there's not much stopping someone from writing a new API as a state tracker for Gallium other than a lack of will. Making such an API work and showing that it's efficient and easy to use would be a good way to get the graphics hardware vendors and "alternative" OS vendors to pay attention and get on board, too, I would think.
                  Why not just implement a Direct3D 11 state tracker instead of inventing a new API? You will probably hit patent problems anyway, whether you invent a new API or implement an existing one. The advantage of implementing D3D11 is that it is well documented and developers are already familiar with it. It would also be much easier to play games on Linux if Wine could use a native D3D implementation instead of having to translate everything to GL.



                  • #10
                    Originally posted by Temar View Post
                    Thanks for this very interesting post.


                    Why not just implement a Direct3D 11 state tracker instead of inventing a new API? You will probably hit patent problems anyway, whether you invent a new API or implement an existing one. The advantage of implementing D3D11 is that it is well documented and developers are already familiar with it. It would also be much easier to play games on Linux if Wine could use a native D3D implementation instead of having to translate everything to GL.
                    Isn't there already a D3D11 state tracker for Gallium and X? AFAIK nobody uses it.

