Woah, AMD Releases OpenGL 4.0 Linux Support!


  • #81
    The beta ATI drivers are, well, beta. I get syntax errors when compiling my own shaders that were previously fine, and the loadGeometry errors show up in the Unigine Heaven 2.0 demo as well.
    I'll wait for the official 10.4 release before filing proper bug reports with ATI, though.
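
    For what it's worth, a minimal compile-status check along the lines of the sketch below is enough to capture the driver's exact error text for those bug reports. This is a generic illustration, not code from this thread; it assumes a current GL context, and the GL 2.0+ entry points need GL_GLEXT_PROTOTYPES or an extension loader such as GLEW.

    /* Illustrative sketch (not from this thread): dump the driver's compile
     * log so the exact "syntax error" text can go into a bug report. */
    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdio.h>

    /* Compile one shader stage and print the info log if it fails.
     * Assumes a GL context is already current. */
    GLuint compile_shader(GLenum type, const char *src)
    {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &src, NULL);
        glCompileShader(shader);

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (ok != GL_TRUE) {
            char log[4096];
            GLsizei len = 0;
            glGetShaderInfoLog(shader, sizeof(log), &len, log);
            fprintf(stderr, "shader compile failed:\n%.*s\n", (int)len, log);
            glDeleteShader(shader);
            return 0;
        }
        return shader;
    }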



    • #82
      Originally posted by Qaridarium
      LOL, be sure AMD hates me!

      I'm just like an AMD-fanboy terrorist.

      And no, I'm not on AMD's payroll...

      I'm just a little baby boy spending his last money on AMD Opterons and AMD graphics cards...

      I really should be on AMD's payroll, because I need €1200 for a new Opteron 6000 workstation ;-)

      PPLLEEAASSEE pay me €1200 for an Opteron 6000!

      *I work for hardware*
      I guess you spend your daddy's money on Opterons?



      • #83
        I'm thinking the next-gen DX11 cards will be better refined and will properly support OpenGL 4.0, then.



        • #84
          Originally posted by CNCFarraday View Post
          Again, without going into details, X can't be "X" the way it is now.

          You'll have the following executing in an asynchronous, possibly heterogeneous architecture context:

          1. vector-graphics
          2. video decoding
          3. 3d graphics
          4. generic OpenCL or whatever
          5. ?
          6. PROFIT!
          (raster ops fit in there somewhere)
          How much of that could have been done with AGP?
          (I just remember the "PCIe is unnecessary" wars.)

          BTW, the vanilla Lucid beta fglrx works pretty much fine with my 4870, so there's that.



          • #85
            Originally posted by MattH View Post
            How much of that could have been done with AGP?
             (I just remember the "PCIe is unnecessary" wars.)

             BTW, the vanilla Lucid beta fglrx works pretty much fine with my 4870, so there's that.
             More bandwidth is always welcome and, in my opinion, AGP was a poor idea in the grander scheme of things. PCI Express is better if only for the fact that any add-on board can be accommodated.

             Look, the question is, in the long term, what does 'graphics card' (or dedicated graphics hardware, if you will) mean exactly? A framebuffer to blit arrays of pixels? A polygon cruncher?

             As much as I love Linux, the truth is that its 'multimedia' stack is broken. It is a bad design, mostly for historical reasons, but also because the problem domain is very complex and most solutions came from people or organizations that needed, at most, a framebuffer to blit pixels into.

             Again, without going into many details, I've seen, for example, a 'graphics card' made out of four PowerPC processors (Freescale e600 dual cores at 1.x-something GHz), two FPGAs and a bunch of other ICs, all OFF-THE-SHELF, built 'in house', running a distributed RT kernel (I suspect it is a heavily modified Linux, but I didn't dig that deep).

             It has a dual 10 Gb interface because, as crazy as it sounds, the IP, controllers & auxiliaries for PCIe interfacing cost an arm and a leg, and scale badly for this kind of research and prototyping, especially when there is no giant corporation pouring money on you.

             Along with 4Gb of DDR2 RAM, all in a <120 W full-load envelope, this thing may not have the raw computing power of a modern ATI or nVidia card, but it makes up for it in other areas. For example, the host communicates with the 'card' via a STREAMS-type protocol; effectively, the RT distributed kernel on the graphics 'card' IS the X server along with the graphics driver. It implements parts of OpenGL 2.1 (mostly because this is a research prototype; there is no reason it could not implement the 4.0 specs, for example). You don't draw pixmaps for buttons; you use a protocol to ask it to draw abstract buttons, and the 'card' takes care of it.

             All the 'graphics', raster, vector, 3D and GUI elements are IN THE CARD, and you can do some out-of-this-world stuff with this approach. Again, all OFF-THE-SHELF components.

             This is what graphics hardware should be. AMD, nVidia & co. could then build specialized processing units (more or less specialized) and use an open RT kernel (distributed, probably) that implements *everything*. Otherwise, everybody will keep having their own more or less different arch, with its more or less different ABIs, APIs, protocols and all that crap.

             Think of it as 'graphics as a service'... the X server really IS a server on another 'machine'.
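
             To make the 'abstract buttons over a protocol' idea concrete, here is a purely hypothetical sketch; none of these names or structures come from the system described above, and C is just a convenient notation. The host writes a small typed request down the STREAMS-like channel, and the card-side RT kernel does the theming, layout and rasterization itself.

             #include <stdint.h>
             #include <string.h>
             #include <unistd.h>

             /* Hypothetical message types: the card draws the widget, the
              * host never touches pixels. */
             enum gfx_msg_type {
                 GFX_DRAW_BUTTON  = 1,   /* abstract GUI element */
                 GFX_DRAW_POLYGON = 2    /* 3D / vector work goes the same way */
             };

             struct gfx_draw_button {
                 uint32_t type;          /* GFX_DRAW_BUTTON */
                 uint32_t window_id;     /* card-side window handle */
                 int32_t  x, y, w, h;    /* placement in window coordinates */
                 char     label[32];     /* the card picks font and theme */
             };

             /* Queue one request on the stream; the kernel on the 'card'
              * acts as the display server and renders it. */
             static int gfx_send_draw_button(int stream_fd, uint32_t win,
                                             int x, int y, int w, int h,
                                             const char *label)
             {
                 struct gfx_draw_button msg;
                 memset(&msg, 0, sizeof(msg));
                 msg.type = GFX_DRAW_BUTTON;
                 msg.window_id = win;
                 msg.x = x; msg.y = y; msg.w = w; msg.h = h;
                 strncpy(msg.label, label, sizeof(msg.label) - 1);
                 return write(stream_fd, &msg, sizeof(msg)) == (ssize_t)sizeof(msg) ? 0 : -1;
             }

             The point being that the wire format carries intent ('a button labelled X, here'), so the same request works whether the backend is an FPGA board, a PowerPC cluster or a GPU.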



            • #86
              Originally posted by CNCFarraday View Post
               It has a dual 10 Gb interface because, as crazy as it sounds, the IP, controllers & auxiliaries for PCIe interfacing cost an arm and a leg, and scale badly for this kind of research and prototyping, especially when there is no giant corporation pouring money on you.
               I wonder if InfiniBand makes sense? Lower latency and up to 4x the theoretical bandwidth, and it's not horrifically priced any more lately, at least when using SuperMicro boxes...

               (I'm thinking of the two-systems-in-one-1U-case boxes that have IB daughtercards; the QDR 40 Gbps ones are pricey but the DDR 20 Gbps ones not so much, and both use smaller ATM-style packets, IIRC.)
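
               For context on the 'up to 4x' figure, the rough arithmetic (signaling rates only, ignoring 8b/10b encoding and protocol overhead) is: a single 10 GbE port carries 10 Gb/s, so the dual interface above is about 20 Gb/s aggregate, while a 4x DDR InfiniBand link signals at 20 Gb/s and a 4x QDR link at 40 Gb/s; 40 / 10 = 4x versus one 10 Gb port, or 2x the dual aggregate.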

              This also sounds like Xdmx:


              But with lots of custom goodness..



              • #87
                 Originally posted by Qaridarium
                 My daddy does not have a PC... and he does not want one.
                 My daddy also does not give me money ;-)

                 I earn my own money by advising people.

                 One of the people I advise lost a lot of money on AMD shares, only because I told him the first 65nm K10 Opteron was a big deal and a good CPU, and that he should buy shares at €12.60!
                 OK, after that the AMD shares fell deep, very deep, to 3 dollars! That means €2.x.

                 OK, bad example of how to earn money...

                 But he still holds the shares, and AMD is going up again to 9 dollars...

                 And he buys a lot of AMD systems too ;-) (but no K10, only K8 and K10.5).
                Yeah, really bad example of money making.



                • #88
                   Anyone else having major issues when launching a 3D game with this OpenGL 4.0 preview driver over an HDMI connection? On my 5770, 3D games load with the screen covered in artifacts and are left unplayable... This isn't an issue over DVI. On a related note, when I tried to test Ubuntu 10.04, the HDMI connection with the default open-source driver wasn't working either and just left me with a blank screen; again, the problem was solved by using a DVI connection.



                  • #89
                    Originally posted by MattH View Post
                     I wonder if InfiniBand makes sense? Lower latency and up to 4x the theoretical bandwidth, and it's not horrifically priced any more lately, at least when using SuperMicro boxes...

                     (I'm thinking of the two-systems-in-one-1U-case boxes that have IB daughtercards; the QDR 40 Gbps ones are pricey but the DDR 20 Gbps ones not so much, and both use smaller ATM-style packets, IIRC.)

                    This also sounds like Xdmx:


                    But with lots of custom goodness..
                     Well, it is a 'pet project', if you will: a hardware implementation of the 'production' system, which is virtual (it runs atop multi-socket, multi-core x86). The goals were to make it entirely with off-the-shelf components that could be interlinked, assembled, programmed and tested 'in-house'. They never had any project that required PCIe development kits; from what I've heard (not my domain), it can be a pain to integrate complex heterogeneous execution units. I don't know exactly, but there are multiple listeners ('servers'), so it would have required some sort of PCIe multiplexer, and that was too complicated for them, etc. Maybe if this were re-designed with other goals in mind, and by people who have worked with PCIe interconnects, it would be a breeze.

                     Not quite like Xdmx. From a bird's-eye view, yes, it looks like it, but actually it is an 'X server on a board', so to speak. The entire graphics backend is delegated to another execution context, not just the blitting, poly crunching, etc. In theory, you can 'crossfire' these cards, as the RT distributed kernels can talk to the ones on other boards; all the software is there. The 'real' system (the virtual one) does this and, in principle, it could be done with the real hardware.



                    • #90
                       What about OpenGL 3.3 and 4.0 in the official driver? When will that be possible? Or at least update the preview driver to support X server 1.7.

