
Intel Hits "Almost There" GL 3.0 Support In Mesa


  • #31
    Originally posted by gigaplex View Post
    No, I think cl333r meant a completely different approach along the lines of voxels. What you described was tessellation, which is an enhancement that still uses triangles under the hood.
    Yes, I mean something like voxels, but also something beyond them, since I've heard voxels have some severe limitations.

    And no, I'm using threads in Java (with Java's threads) and in C and Gtk (with pthreads), and I didn't find it so mind-bogglingly difficult. I don't think it's that difficult in GL either; you just have to spend more brain cycles, by which I mean being more careful and writing additional code. If you think that's too much for you, then don't use GL: go with DX, or create DX on Linux, whatever. For me it's not a critical issue, since I've worked with threads in other environments and it wasn't a big deal. I never said: "Jesus Christ! Gtk is not thread-safe! We need to rewrite it! OMG! I can't afford to create my threads and manage them, I'm too feeble-minded for that!" Did I ever have such an attitude? No. So why do others claim it's a critical issue with GL? Not to mention that in C you actually gain speed, because you use extra CPU cores if available, while in GL/DX multi-threading uses the same amount of cores no matter what, because it's still being serialized under the hood.
    Last edited by cl333r; 22 December 2011, 09:56 AM.



    • #32
      Originally posted by Temar View Post
      Why not just implement a DirectX 11 state tracker instead of inventing a new API?
      Because it's from Microsoft...

      Originally posted by Wilfred View Post
      Isn't there already a d3d 11 state tracker for gallium and X? AFAIK nobody uses it.
      It isn't done yet; it's still incomplete...



      • #33
        Originally posted by Temar View Post
        Why not just implement a DirectX 11 state tracker instead of inventing a new API?
        Politics. Linux distros might start carrying it (or might not -- look at Mono), but you'll likely never see it implemented on iOS or OS X. Since game developers care FAR more about those platforms than they do about Linux, the end result will be almost no real improvement for portable graphics.

        Also, D3D is C++, and (again, mostly due to politics) that means certain OSes or developers are going to shun it even if Microsoft LGPL'd the whole stack today.

        Originally posted by cl333r
        "Proper threading" "easy threading" is a reiteration of the same issue under different names - that you can use threads in GL if you use extra brain cycles
        These are not the same. It is _impossible_ to thread GL the same way you do D3D. Impossible. Unfixable. The workarounds possible in GL are not complete and do not give you all the same features and advantages D3D11 offers. This cannot be changed without completely breaking the API and making a new one that passes explicit context object handles to all resource API calls; even if we called that "OpenGL", it would still be an entirely new API, incompatible with what OpenGL is today.

        You _can_ thread GL, but only in a way that mandates the use of global process-wide locks and tons of explicit state resetting and management inside those locks, and hence it is not possible to get the same level of performance as D3D (and performance is the whole damn reason we want threading). Using the DSA extension (which is a mess, and just a giant pile of hacks to work around GL's dumb-as-mud API) would fix part of that, but not all of it.
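
        To make the "explicit context object handles" point concrete, here's a rough C++ sketch of what I mean (illustrative only: the helper function and setup are mine, error handling omitted). D3D11 resource creation goes through an explicit ID3D11Device pointer, and the device's creation methods are free-threaded, so a loader thread needs no global lock:

        Code:
        #include <d3d11.h>

        // Illustrative helper (not a D3D11 API): create a vertex buffer
        // through an explicit device handle. ID3D11Device creation methods
        // are free-threaded, so this is safe to call from any thread.
        ID3D11Buffer* CreateVertexBuffer(ID3D11Device* device,
                                         const void* data, UINT byteSize)
        {
            D3D11_BUFFER_DESC desc = {};
            desc.ByteWidth = byteSize;
            desc.Usage = D3D11_USAGE_IMMUTABLE;
            desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

            D3D11_SUBRESOURCE_DATA init = {};
            init.pSysMem = data;

            ID3D11Buffer* buffer = nullptr;
            device->CreateBuffer(&desc, &init, &buffer);
            return buffer;
        }

        // The GL equivalent has no handle to pass: glBufferData() acts on
        // whatever buffer is bound in whatever context is current on *this*
        // thread, so the same code is only valid on the context-owning thread.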

        It's also worth noting that the OpenGL API has other deficiencies that make it impossible (again, literally impossible) to achieve the same level of performance as D3D. The state management APIs are one, although they can be _partially_ worked around with the DSA extension or a possible future OpenGL API addition. The mutable object nature is another, which is not fixable. The object naming scheme (integer ids for all objects) is another design foible that requires extra driver-side work that shouldn't be necessary in any sensibly designed API.
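
        As a concrete illustration of the bind-to-edit problem and the DSA workaround (just a sketch; it assumes a live context and the EXT_direct_state_access extension are available):

        Code:
        #include <GL/glew.h>

        void SetNearestFiltering(GLuint tex)
        {
            // Classic GL: editing an object means binding it first, which
            // clobbers hidden global state the renderer may be relying on...
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            // ...and it's on you to save and restore the previous binding.
        }

        void SetNearestFilteringDSA(GLuint tex)
        {
            // EXT_direct_state_access: the object is named explicitly, so
            // there is no bind and no global state to juggle.
            glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        }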

        I know the Linux/FOSS users are still getting hard-ons about _finally_ maybe having OpenGL 3.0, but in the rest of the graphics programming world many people were outright pissed when they initially got OpenGL 3.0, because everybody and their mother desperately wanted (and expected, after promises from Khronos) to get a new object-based API, a.k.a. Longs Peak. All of the complaints with OpenGL that led to the Longs Peak proposal are still 100% valid today. Only nobody at Khronos is even pretending to care anymore.

        Longs Peak was not simply a "make a nicer API" deal. It was actually going to fix the problems I outlined above that leave OpenGL incapable of matching D3D's performance and features. These articles from Khronos about the new API design spell out those problems in better detail, and explain how a completely new API could fix them: http://www.opengl.org/pipeline/article/vol002_3/ and http://www.opengl.org/pipeline/article/vol003_1/. An example of their proposed API (which I personally still don't think is as nice as D3D's, but at least would not be objectively worse like the current API): http://www.opengl.org/pipeline/article/vol003_4/.

        But hey, let's all pretend that the rest of the graphics industry and even Khronos themselves haven't directly countered your complaints about my posts. I'm sorry for posting "emotional" hate posts about OpenGL, I'm just a Microsoft fanboy, and I can't possibly be speaking from any real experience or in-depth knowledge of games programming. You win the Internet, sir. </sarcasm>

        Originally posted by cl333r
        Yes, I mean something like voxels, but more than that since I heard they have some severe limitations.
        Thank you for posting insightful commentary about graphics technologies that you've "heard about." Your knowledge and insight are invaluable to the graphics community. I'll talk to some colleagues about getting you an invitation to speak at SIGGRAPH next year about this and other exciting innovations you've seen someone post about on Reddit once. </more-sarcasm>



        • #34
          Originally posted by cl333r
          And no, I'm using threads in Java (with Java's threads) and in C and Gtk (with pthreads) and I didn't find it so mind-bogglingly difficult. I don't think it's so difficult in GL [...]
          Aha, it seems that you are not aware of how OpenGL deals with threading. Elanthis covered this at some length, but here is a breakdown of the problems:

          OpenGL rendering is routed through an implicit, thread-local "current" OpenGL context (i.e. you can only use OpenGL commands on the single thread that 'owns' the context). This means you can implement multiple threads in two ways:

          (a) move tasks such as texture loading to a background thread but issue all OpenGL commands from a single thread. This works quite well, and it's something every game developer worth his salt does; a minimal sketch of this pattern follows after (b) below.

          (b) create multiple OpenGL contexts, one for each thread you wish to use OpenGL on. This is where the fun starts! OpenGL contexts don't share resources by default (i.e. a texture created on context #1 is not available on context #2), making this approach pretty much useless on its own. That's not good enough, of course, so platform vendors offer ways to share resources (wglShareLists on Windows, the share-context argument to glXCreateContext on X11, etc.). The problem is that the exact behavior of these mechanisms is ill-defined, and different drivers behave differently: some drivers work; others work only if you don't create any resources prior to setting up sharing (otherwise the call fails); some others don't support context sharing at all (e.g. Intel); and a few claim to support sharing but crash or behave weirdly if you try to use it.

          What's worse is that even when a driver claims to support sharing, it may still use global locks internally, making this slower than using just a single thread. An actual instance I've encountered: thread #1 renders, thread #2 compiles a new shader in the background. Even though this is a new shader (not used anywhere yet), thread #1 is stalled while thread #2 is compiling. Not good, not good at all.
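
          To show what option (a) looks like in practice, here's a minimal sketch (C++11 threads; PendingTexture and DecodeImageFile are placeholders for whatever image loader you use, not a real library):

          Code:
          #include <GL/gl.h>
          #include <mutex>
          #include <queue>
          #include <utility>
          #include <vector>

          struct PendingTexture {
              int width = 0, height = 0;
              std::vector<unsigned char> rgba;   // decoded pixels, no GL involved
          };

          PendingTexture DecodeImageFile(const char* path);  // placeholder decoder

          std::mutex g_queueLock;
          std::queue<PendingTexture> g_ready;

          // Worker thread: pure CPU work, never touches the GL context.
          void LoaderThread()
          {
              PendingTexture t = DecodeImageFile("foo.png");
              std::lock_guard<std::mutex> lock(g_queueLock);
              g_ready.push(std::move(t));
          }

          // Called once per frame on the thread that owns the GL context.
          void UploadPendingTextures()
          {
              std::lock_guard<std::mutex> lock(g_queueLock);
              while (!g_ready.empty()) {
                  PendingTexture& t = g_ready.front();
                  GLuint tex = 0;
                  glGenTextures(1, &tex);
                  glBindTexture(GL_TEXTURE_2D, tex);
                  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, t.width, t.height, 0,
                               GL_RGBA, GL_UNSIGNED_BYTE, t.rgba.data());
                  g_ready.pop();
              }
          }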

          What does D3D11 do differently? It offers a third, more efficient way to do multi-threading:

          (c) thread #1 renders, while threads #2-n create queues of commands that are sent to the first thread for execution, keeping all cores busy. There is no way to do this in OpenGL (the closest thing, display lists, was removed in GL 3.1, and it wasn't flexible enough anyway).

          What's worse, this is impossible to implement as an extension of the current OpenGL semantics (mutable objects cause all kinds of undefined behavior). It requires an actual rewrite - something that Khronos is loath to do.
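
          For reference, this is roughly what (c) looks like with D3D11 deferred contexts (a bare-bones sketch; error handling and the actual draw calls are omitted):

          Code:
          #include <d3d11.h>

          // Worker thread: record commands into a deferred context. Nothing
          // here touches the GPU or blocks the render thread.
          ID3D11CommandList* RecordOnWorkerThread(ID3D11Device* device)
          {
              ID3D11DeviceContext* deferred = nullptr;
              device->CreateDeferredContext(0, &deferred);

              // deferred->IASetInputLayout(...), deferred->Draw(...), etc.

              ID3D11CommandList* commands = nullptr;
              deferred->FinishCommandList(FALSE, &commands);
              deferred->Release();
              return commands;
          }

          // Render thread: replay the recorded work on the immediate context.
          void Submit(ID3D11DeviceContext* immediate, ID3D11CommandList* commands)
          {
              immediate->ExecuteCommandList(commands, FALSE);
              commands->Release();
          }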

          It's unfortunate that the only cross-platform 3D API is in such a sad state. Unless this changes in a fundamental way, it pretty much ensures Microsoft's dominance in 3D gaming (which kinda puts that "5 to 15 years until we see something different" comment into a new light).



          • #35
            Originally posted by Sidicas View Post
            id Software wrote a game in OpenGL called "RAGE", but it requires OpenGL 3.3 as a minimum. id Software really didn't have any hopes of releasing the game for Linux any time soon because of the lack of graphics driver support (and graphics driver performance problems). It was released for Mac OS X, Windows, and the PS3 game console. It's a GREAT game.
            Released for MacOS? Really? Where?

            So what about libs and drivers on MacOS?

