OpenGL 3.2 Specification Officially Released


  • #16
    Originally posted by mirv
    Sorry, I was just curious - which extension exactly helps wine in the 3D area? Perhaps you mean that many of the new features available in opengl3.2 will be of benefit to wine - but the wine devs still have to incorporate the changes into wine.
    http://www.opengl.org/registry/specs...array_bgra.txt
    http://www.opengl.org/registry/specs...onventions.txt
    http://www.opengl.org/registry/specs...ing_vertex.txt
    and some others as well; http://www.g-truc.net/#news0170 summarizes them quite well. They exist mostly for D3D compatibility purposes, but that's what Wine is about, after all.

    AFAIK some of them are already used within Wine, so we just need to wait for the drivers to support these extensions (especially in the OSS area).
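
    As a concrete illustration of the D3D angle: D3DCOLOR vertex data is laid out byte-wise as BGRA, and vertex_array_bgra lets GL consume that layout directly by accepting GL_BGRA as the size argument. A minimal sketch, assuming an extension loader such as GLEW is initialized - the function name and parameters here are illustrative, not Wine's code:

    #include <GL/glew.h>  /* assumes a loader exposing the extension */

    /* Feed D3DCOLOR-style (byte-order BGRA) vertex colors to GL without
       swizzling them on the CPU first - the point of the extension. */
    void bind_d3dcolor_attribute(GLuint attrib, GLsizei stride,
                                 const void *ptr)
    {
        /* GL_BGRA as the "size" argument is what vertex_array_bgra adds;
           it requires unsigned bytes with normalization enabled. */
        glVertexAttribPointer(attrib, GL_BGRA, GL_UNSIGNED_BYTE, GL_TRUE,
                              stride, ptr);
        glEnableVertexAttribArray(attrib);
    }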



    • #17
      Originally posted by NeoBrain
      http://www.opengl.org/registry/specs...array_bgra.txt
      http://www.opengl.org/registry/specs...onventions.txt
      http://www.opengl.org/registry/specs...ing_vertex.txt
      and some others as well; http://www.g-truc.net/#news0170 summarizes them quite well. They exist mostly for D3D compatibility purposes, but that's what Wine is about, after all.

      AFAIK some of them are already used within Wine, so we just need to wait for the drivers to support these extensions (especially in the OSS area).
      Ah, cheers!
      Yes, this is looking to be a rather good OpenGL release.



      • #18
        Originally posted by Qaridarium
        OpenGL 3.2 = the Wine extensions for AMD.

        Wine 1.1.25 already has the Wine-specific extensions for nvidia.

        OpenGL 3.2 brings these nvidia-only extensions to AMD cards!

        In fact, OpenGL 3.2 is a power-up for Catalyst and AMD.

        Not really for nvidia, because Wine already has the nvidia-only Wine extensions.

        With OpenGL 3.2, AMD has the chance to reach the same performance in Wine.
        If you consider extensions that nvidia supports earlier than the others to be "nvidia-only", then I guess you can say pretty much every OGL release has had "nvidia-only" extensions, as nvidia is always the first to support upcoming releases. Once again, if you want to develop cutting-edge tech you have to use the nvidia blobs and wait for everybody else to catch up (ranging from the typical 6-month wait for ATi's blobs to years in the OSS arena).



        • #19
          Just a small clarification: "Nvidia-only" has a specific meaning in the world of OpenGL. It refers to extensions released by Nvidia and not ratified by the ARB or other vendors. Note that this doesn't mean that other vendors cannot implement those extensions (it has been done before), just that it's unlikely (because they expose Nvidia-specific functionality). AMD and every other vendor can and do release extensions in the same spirit: "AMD_vertex_shader_tessellator" (awesome extension, btw), "SGI_swap_control" and hundreds more.

          OpenGL 3.2 has ratified some "nvidia-only" extensions and moved them into core (which means vendors are required to implement them before they can claim OpenGL 3.2 compliance), or has made them ARB extensions (which means that multiple vendors have agreed that this is how something should work and that they will likely implement them soon).

          Wine has used some NV-only extensions to improve compatibility and/or performance with D3D. The ARB has decided that those extensions are generally useful and has promoted them (see the sketch at the end of this post).

          OpenGL 3.2 is a very exciting release because: a) it brings some very useful stuff into core (base vertex index, geometry shaders, seamless cubemaps) and b) it shows continued dedication on the part of the ARB. The ARB still has a lot of work to do to dispel the fears of the past (no public communication, updates delayed for years), but they are doing a damn good job right now.

          Now we only need to make Intel follow the lead of Nvidia and AMD. Their hardware may be slow, but it's capable - it's their drivers that are really lacking in the OpenGL department.
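
          To make the promotion story concrete, here is a minimal sketch of how an app checks for a promoted extension at runtime, assuming a loader such as GLEW exposes the 3.x entry points; has_extension and use_base_vertex are hypothetical helpers, while glGetStringi and GL_NUM_EXTENSIONS are the standard indexed query:

          #include <string.h>
          #include <GL/glew.h>  /* loader for the 3.x entry points */

          /* Scan the driver's extension list via the indexed query
             (the classic big-string glGetString(GL_EXTENSIONS) is
             gone from 3.x core contexts). */
          static int has_extension(const char *name)
          {
              GLint i, n = 0;
              glGetIntegerv(GL_NUM_EXTENSIONS, &n);
              for (i = 0; i < n; ++i)
                  if (strcmp((const char *)glGetStringi(GL_EXTENSIONS, i),
                             name) == 0)
                      return 1;
              return 0;
          }

          /* Prefer the ratified name; on a full 3.2 context one would
             simply check the GL version instead, since the feature is
             core there. */
          static int use_base_vertex(void)
          {
              return has_extension("GL_ARB_draw_elements_base_vertex");
          }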



          • #20
            Originally posted by deanjo
            If you consider extensions that nvidia supports earlier than the others to be "nvidia-only", then I guess you can say pretty much every OGL release has had "nvidia-only" extensions, as nvidia is always the first to support upcoming releases. Once again, if you want to develop cutting-edge tech you have to use the nvidia blobs and wait for everybody else to catch up (ranging from the typical 6-month wait for ATi's blobs to years in the OSS arena).
            Wine only made one mistake... the Wine devs thought the extensions were already official OpenGL...

            That's all...



            • #21
              Why is it a mistake to get the max out of existing hardware? ATI is free to add the same extensions.



              • #22
                Originally posted by Qaridarium
                Wine only made one mistake... the Wine devs thought the extensions were already official OpenGL...
                Wouldn't it be pretty hard to make that mistake, considering that vendor-specific extensions have their own prefixes?



                • #23
                  Is there any work towards "sanitizing" OpenGL? It is very hard to bind OpenGL properly in languages like Java, because you can pass args to OpenGL functions that will crash your program. This sort of thing does not fly in Java. If OpenGL or its wrapper were to do proper bounds checking on all its args, it would slow everything down. I wonder if there is anything that can be done.
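
                  For illustration, a hypothetical sketch of the kind of checking this would take - a wrapper that validates a draw call before it reaches the driver, so a bad range fails loudly instead of crashing; checked_draw_arrays and vbo_vertex_count are invented for the example, glDrawArrays is the real call:

                  #include <assert.h>
                  #include <GL/gl.h>

                  /* Vertex count of the currently bound VBO,
                     recorded by the (not shown) upload path. */
                  static GLsizei vbo_vertex_count;

                  /* Fail loudly on a bad range instead of letting
                     the driver read past the buffer and crash. */
                  static void checked_draw_arrays(GLenum mode,
                                                  GLint first,
                                                  GLsizei count)
                  {
                      assert(first >= 0 && count >= 0);
                      assert(first + count <= vbo_vertex_count);
                      glDrawArrays(mode, first, count);
                  }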



                  • #24
                    OpenGL does not have to follow Java's brain-damaged philosophies.



                    • #25
                      Originally posted by RealNC
                      OpenGL does not have to follow Java's brain-damaged philosophies.
                      Why is it brain-dead to throw an exception instead of crashing? You can recover from an exception.

                      If OpenGL were "sanitary" then it could be used in sandboxes for web applets and such. In its current state you have to wrap it up or else it is just a big security hole.



                      • #26
                        Originally posted by RealNC
                        OpenGL does not have to follow Java's brain-damaged philosophies.
                        Why is it brain-dead to throw an exception instead of crashing? You can recover from an exception.

                        If OpenGL were "sanitary" then it could be used in sandboxes for web applets and such. In its current state you have to wrap it up or else it is just a big security hole.

                        It's not even just Java. It is the same situation for Ruby or Perl or Scheme or Python or any other interpreted language.



                        • #27
                          Have you ever profiled a 3-D app? Even with a fancy high-end card, your program spends the vast majority of its cycles inside of OpenGL calls. This means that you can write your app in a slow interpreted language and it will really not slow things down much at all.

                          3-D apps are all about look, and look is very subjective, so you do a lot of fussing with the properties of objects to get them to look "right". This means lots of recompiling if you write in C or C++. If you work in an interpreted language you can quickly tune the look and the action the way you want. And THEN you can port it to C or C++ if you really want the speed.

                          "When writing a program, you should plan to throw the first version away, because you inevitably will, whether you planned to or not" - Gerald Sussman, inventor of Scheme and otherwise really smart guy.

                          If one follows his advice then one should always prototype in a nice, easy-to-work-with language. There are enough headaches in developing new code; why give yourself the extra burden of worrying about pointers and memory allocation when you should be focused on how your game is going to play or how your visualization is going to show the tumor cells?

                          The problem with OpenGL is that even your prototype app will dump core if you mess up, and you will still find yourself in gdb looking at stack traces even though you made an effort to keep your head out of the bits and bytes.
                          Last edited by frantaylor; 08-04-2009, 11:58 PM.



                          • #28
                            OpenGL is very low-level for performance reasons - it isn't well suited to being driven directly from a higher-level language, which is why wrappers / bindings exist for it. OpenGL does return error values, however, and these can be checked easily enough. If the error values are not properly reported, it's not the fault of OpenGL, but rather of the binding that implements the checks.
                            Interpreted languages already do an awful lot of error checking internally, so wrappers could do that quite easily as well.
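
                            For reference, that checking usually ends up looking something like this - a sketch, not any particular binding's code; the GL_CHECK name is invented, while glGetError and its drain-the-queue protocol are standard OpenGL:

                            #include <stdio.h>
                            #include <GL/gl.h>

                            /* Run a GL call, then drain and report
                               every error the driver has queued. */
                            #define GL_CHECK(call)                     \
                                do {                                   \
                                    GLenum e;                          \
                                    call;                              \
                                    while ((e = glGetError())          \
                                           != GL_NO_ERROR)             \
                                        fprintf(stderr,                \
                                          "%s: GL error 0x%04x\n",     \
                                          #call, (unsigned)e);         \
                                } while (0)

                            /* usage:
                               GL_CHECK(glBindTexture(GL_TEXTURE_2D, t));
                             */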



                            • #29
                              Originally posted by Kano
                              Why is it a mistake to get the max out of existing hardware? ATI is free to add the same extensions.
                              It's not, as long as you're mentally prepared to do things the Right Way (tm) and write distinct code paths for all vendors - read: Intel, nVidia, ATi, etc. - like Microsoft's DirectX afaik does. Wine's devs seem to adamantly believe that they can do some uniform solution, which is obviously wrong - from what I've heard, they seem to believe it should be enough to just do the nVidia codepath, leaving it to the driver vendors to adapt their drivers to it. This is very likely to give you reduced performance on non-nVidia hardware even if the driver implementation is as good as nVidia's. Of course, who wants to do three times - or five, if we count the open ATi and nVidia drivers - as much work?
                              Last edited by nanonyme; 08-05-2009, 07:54 AM.
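
                              A hypothetical sketch of what such per-vendor codepaths boil down to - the enum and select_codepath are invented for illustration, glGetString(GL_VENDOR) is the standard query:

                              #include <string.h>
                              #include <GL/gl.h>

                              typedef enum {
                                  PATH_NVIDIA,
                                  PATH_ATI,
                                  PATH_INTEL,
                                  PATH_GENERIC
                              } gpu_path;

                              /* Branch once on the driver's vendor
                                 string, then dispatch through the
                                 chosen path everywhere else. */
                              static gpu_path select_codepath(void)
                              {
                                  const char *v = (const char *)
                                      glGetString(GL_VENDOR);
                                  if (!v)
                                      return PATH_GENERIC;
                                  if (strstr(v, "NVIDIA"))
                                      return PATH_NVIDIA;
                                  if (strstr(v, "ATI"))
                                      return PATH_ATI;
                                  if (strstr(v, "Intel"))
                                      return PATH_INTEL;
                                  return PATH_GENERIC;
                              }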



                              • #30
                                I am sure that will be done in time for fglrx with OpenGL 3.2 support. As NV already provides those test drivers, Wine could adopt the new codepath now, and fglrx users will be happy when ATI manages to provide it too. Currently it does not matter in which way the functions are used, as they only run on NV hardware.

