Woah, AMD Releases OpenGL 4.0 Linux Support!


  • #41
    It has yet to be determined whether ATI's 57xx and lower series cards can actually be considered OpenGL 4.0 compliant, because the spec requires 64-bit floats, which only the double-precision 58xx and higher cards are capable of.
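
    For the curious, here is a minimal sketch of how one might check that at runtime. It assumes a GL 3.0+ context is already current and that an extension loader such as GLEW is in use; GL_ARB_gpu_shader_fp64 is the extension that carries the 64-bit float support.

    /* Sketch: report the GL version string and whether the driver exposes
     * GL_ARB_gpu_shader_fp64. Assumes a context is current and functions
     * are loaded (e.g. via GLEW). */
    #include <stdio.h>
    #include <string.h>
    #include <GL/glew.h>

    static int has_fp64(void)
    {
        GLint n = 0, i;
        glGetIntegerv(GL_NUM_EXTENSIONS, &n);        /* GL 3.0+ style query */
        for (i = 0; i < n; i++) {
            const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
            if (ext && strcmp(ext, "GL_ARB_gpu_shader_fp64") == 0)
                return 1;
        }
        return 0;
    }

    void report_gl40_support(void)
    {
        printf("GL_VERSION: %s\n", (const char *)glGetString(GL_VERSION));
        printf("fp64      : %s\n", has_fp64() ? "yes" : "no");
    }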



    • #42
      Originally posted by R3MF View Post
      It has yet to be determined whether ATI's 57xx and lower series cards can actually be considered OpenGL 4.0 compliant, because the spec requires 64-bit floats, which only the double-precision 58xx and higher cards are capable of.
      Allow me to allay your fears.
      http://www.geeks3d.com/20100317/rade...ations-on-gpu/

      Plus, to anyone wondering, the tessellation unit in the r6xx derivatives will not be good enough for GL_ARB_tessellation_shader. It uses a completely different tessellation algorithm (odd rather than DX11's even) and sits in the wrong part of the pipeline. GL_AMD_vertex_shader_tessellator also didn't expose all the functionality exposed in their DX driver; it lacks the rather crucial ability to set a per-edge tessellation factor.
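
      To make the missing piece concrete, here is a minimal GLSL 4.00 tessellation control shader, embedded as a C string for glShaderSource()/glCompileShader(), that sets a separate outer factor for each triangle edge; that per-edge control is exactly what GL_ARB_tessellation_shader provides. The variable name tcs_source is ours; treat the snippet as an illustrative sketch, not anyone's driver code.

      /* Sketch: GLSL 4.00 tessellation control shader with per-edge outer
       * tessellation factors, embedded as a C string. */
      static const char *tcs_source =
          "#version 400\n"
          "layout(vertices = 3) out;\n"
          "void main() {\n"
          "    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;\n"
          "    if (gl_InvocationID == 0) {\n"
          "        gl_TessLevelOuter[0] = 2.0;  /* one factor per triangle edge */\n"
          "        gl_TessLevelOuter[1] = 4.0;\n"
          "        gl_TessLevelOuter[2] = 8.0;\n"
          "        gl_TessLevelInner[0] = 4.0;\n"
          "    }\n"
          "}\n";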



      • #43
        Originally posted by koenvdd View Post
        Maybe Valve's move to OS X and OpenGL is putting some fire under ATI's OpenGL team. Valve has always been buddies with ATI.
        ATI's Mac OS X drivers are "FireGL-class" and professional OpenGL support is excellent. I doubt that the ATI OpenGL team is stroking the weed too much.

        As I've said in another post, the problem revolves around Linux's video/3D model and the 'diversity' of the system. DRI will take care of a lot of these problems, along with the X-Server rework.

        If you ask me, with all the diversity and forks in the Linux/OSS world, one wonders why there is only "one" X-Server... The problem is that the video drivers, the graphics server and the GPU offloading (an umbrella term for delegating executable code, either for 3D or parallel computing, to another execution unit) are tightly coupled in a very unfortunate way. From my experience, in the long term this is the problem. X-Server & co were designed to work with a 'framebuffer'. Now we are in a position where video/3D/specialized computation are literally offloaded into another address space, on another execution unit whose architecture is alien to the one the kernel & user land run on.

        I can't go into details because of NDAs and the like, but we have a "video card" here (we nicknamed them Voodoo X) that is a quad-socket quad-core Opteron machine, with 2 video cards (FireGL or Quadro) connected to the rendering controller by a quad 10GBASE-SR interface (2 direct, 2 indirect through a controller/distributor). We have our own Linux kernel stacks on both ends and even a modified X (for internal use only), and it shows badly that the whole house was designed in a time when graphics cards sat in ISA slots and the top dog was an ATI made in Canada with 512k (half a meg!), which I still have at home, bolted to a wall...

        If it were up to me, I'd make OpenGL mandatory for the X-Server, make the whole "widget" drawing SVG-based and offload it to the GPU, build the graphics libraries that GTK & co depend on on top of it, and put the "compositor" for video and 3D (also delegated to the GPU) in the kernel, maybe separated into kernel-mode "drivers" for 2D (SVG/raster + composition), 3D and video, with one or more user-space compositors in the X-Server. Let the "master" compositors in user space delegate requests to a multitude of kernel-space compositors. That makes it easy, at a higher level of abstraction, to turn off physical cards, move execution from a dedicated card to an on-chip one for power saving, or shuffle execution from one unit to another: for example, when the more powerful unit is busy rendering SVG icons while an HD video stream is pending decoding, you move the light load to the IGP and let the main GPU decode the video without "context switching" (so to speak), which is problematic.

        Oh crap, I've already given up too many details...



        • #44
          Originally posted by LinuxID10T View Post
          I tried it with Nexuiz, and my frame rate went from 18 to 30 FPS at max settings. Everything seems quite a bit speedier.
          I see the same thing. Tested on HoN and UrT.



          • #45
            Originally posted by brouhaha View Post
            As I understand it, AMD's plan is to release 1.7 compatible drivers as soon as all of the major distributions have upgraded to 1.8.
            Made my day.

            On the other hand, that leaves those of us who keep up with the xorg-server version as testers for the open source drivers...



            • #46
              Originally posted by CNCFarraday View Post
              ATI's Mac OS X drivers are "FireGL-class" and professional OpenGL support is excellent. I doubt that the ATI OpenGL team is stroking the weed too much.

              *snip*

              Oh crap, I've already given up too many details...
              Patches, please.

              How many lines of code were included in that paragraph? 1,000,000?

              Of course that is the end vision, but there is a lot of work to do. The means to get there also differ from your vision.

              PSST. You don't win anything by making OpenGL mandatory; instead you lose the ability to run on less common hardware without GL support. In fact, X should be just one of the DRI clients, if we have a GL-capable driver with DRM modules.
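
              To illustrate the "X as just another DRI client" point, here is a small sketch using libdrm; it assumes a /dev/dri/card0 node and linking with -ldrm, and simply asks the kernel DRM driver to identify itself, the same way any client (X included) can.

              /* Sketch: any process can talk to the DRM device directly.
               * Assumes libdrm (xf86drm.h); build with -ldrm. */
              #include <stdio.h>
              #include <fcntl.h>
              #include <unistd.h>
              #include <xf86drm.h>

              int main(void)
              {
                  int fd = open("/dev/dri/card0", O_RDWR);
                  if (fd < 0) {
                      perror("open /dev/dri/card0");
                      return 1;
                  }
                  drmVersionPtr v = drmGetVersion(fd);  /* ask the kernel driver who it is */
                  if (v) {
                      printf("DRM driver: %s (%s)\n", v->name, v->desc);
                      drmFreeVersion(v);
                  }
                  close(fd);
                  return 0;
              }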



              • #47
                Originally posted by CNCFarraday View Post
                ATI's Mac OS X drivers are "FireGL-class" and professional OpenGL support is excellent. I doubt that the ATI OpenGL team is stroking the weed too much.

                *snip*

                Oh crap, I've already given up too many details...
                Hey cool, this Jedi mind trick works.

                Also, I was operating under the impression that Mac OS X's OpenGL was a bit wonky because Apple wanted to do it themselves instead of using ICDs. Of course, I could be way off there. Maybe I was thinking of something else.



                • #48
                  Originally posted by R3MF View Post
                  It has yet to be determined whether ATI's 57xx and lower series cards can actually be considered OpenGL 4.0 compliant, because the spec requires 64-bit floats, which only the double-precision 58xx and higher cards are capable of.
                  Thanks to the one-minute edit system, I'll have to make a second post.
                  While I'm positive that I replied to this, I can't seem to find it.
                  Anyway.

                  Allow me to allay your fears.
                  http://www.geeks3d.com/20100317/rade...ations-on-gpu/



                  • #49
                    Again, without going into details, X can't be "X" the way it is now.

                    You'll have all of the following executing in an asynchronous, possibly heterogeneous architecture context:

                    1. vector-graphics
                    2. video decoding
                    3. 3d graphics
                    4. generic OpenCL or whatever
                    5. ?
                    6. PROFIT!
                    (raster ops fit in there somewhere)

                    Seriously now, all these things will compete for GPU time on one or more execution units with their own IOMMU, address space & the whole deal, especially if you have something exotic like an NVIDIA IGP plus a dedicated ATI GPU (and, to be wicked, something like that Leadtek PCIe card with a Cell chip on it). These have nothing to do with "GUI" or video. They represent "threads" of execution that are delegated to an external processing unit because its architecture is better optimized for them than the "generic" CPU that, presumably, runs the kernel. If OpenCL becomes a standard like OpenGL, there is no reason why IBM couldn't make a Cell-based OpenCL card, or ARM for that matter (or Intel... he he he).
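
                    As a small sketch of what "one or more execution units" looks like from user space, the snippet below just enumerates whatever OpenCL platforms and devices the system exposes (GPU, CPU, accelerator). It assumes an OpenCL ICD is installed and linking with -lOpenCL; the fixed-size arrays are a simplification.

                    /* Sketch: list the OpenCL "execution units" the system exposes.
                     * Assumes an OpenCL ICD; build with -lOpenCL. */
                    #include <stdio.h>
                    #include <CL/cl.h>

                    int main(void)
                    {
                        cl_platform_id platforms[8];
                        cl_uint nplat = 0;
                        clGetPlatformIDs(8, platforms, &nplat);

                        for (cl_uint p = 0; p < nplat; p++) {
                            cl_device_id devices[16];
                            cl_uint ndev = 0;
                            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &ndev);

                            for (cl_uint d = 0; d < ndev; d++) {
                                char name[256];
                                cl_device_type type;
                                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
                                clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
                                printf("device: %-40s %s\n", name,
                                       (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                                       (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other");
                            }
                        }
                        return 0;
                    }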

                    You can even go medieval and represent the "display" (multiple ones) and the "objects" (abstractions) living on it as a virtual fs à la procfs. "Oh my god, it's full of splines!" He he he... we have really cool toys...

                    Yes, it is A LOT to take into account, but, in the long run, it is better. Otherwise it will just delay the inevitable.



                    • #50
                      Originally posted by koenvdd View Post
                      Also, I was operating under the impression that Mac OS X's OpenGL was a bit wonky because Apple wanted to do it themselves instead of using ICDs. Of course, I could be way off there. Maybe I was thinking of something else.
                      There's the problem of how user-space applications use OpenGL, especially through OS X's "compositor". The OpenGL driver itself is OK.

