Port fglrx OpenGL stack to Gallium3D


  • Port fglrx OpenGL stack to Gallium3D

    With OSS ATI drivers moving to the Gallium3D architecture in the medium to long term, could the fglrx OpenGL/OpenCL stack be ported over to become a Gallium3D state tracker and released as a blob?

    While this would certainly require a large initial investment of time and resources, in the long run it would have a lot of benefits. Firstly, it would reduce the massive duplication of code that comes from maintaining two drivers. It would also give the proprietary code a stable ABI to work against, as opposed to constantly chasing the unstable kernel and xorg ABIs. That in turn could let AMD concentrate development on the OSS drivers. In addition, having an advanced OpenGL state tracker would be a big benefit to other Gallium drivers such as nouveau. (A rough sketch of the proposed split follows at the end of this post.)

    So what do you think? Could we build the cathedral on top of the bazaar?
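
    To make the proposed split concrete, here is a minimal, contrived C sketch of the Gallium3D idea: an API-specific state tracker that talks to the hardware only through a small driver vtable. The struct and function names below are illustrative stand-ins, not the real pipe_screen/pipe_context interfaces from Mesa.

    /* Contrived sketch of the Gallium3D split; the names are illustrative
     * stand-ins, not the real pipe_screen/pipe_context vtables. */
    #include <stdio.h>

    /* "Pipe driver": the hardware-specific half (r300g, nouveau, or a
     * hypothetical binary fglrx backend). */
    struct pipe_driver {
        const char *name;
        void (*bind_shader)(struct pipe_driver *drv, const char *ir_tokens);
        void (*draw)(struct pipe_driver *drv, unsigned vertex_count);
    };

    /* "State tracker": the API-specific half (GL, GLES, OpenCL, or, in this
     * thread's proposal, the fglrx GL front end).  It never touches hardware
     * directly; it only calls through the vtable above. */
    struct gl_state_tracker {
        struct pipe_driver *drv;
    };

    static void st_draw_arrays(struct gl_state_tracker *st, unsigned count)
    {
        st->drv->bind_shader(st->drv, "PASSTHROUGH_VS");  /* translated GL state */
        st->drv->draw(st->drv, count);
    }

    /* Toy software backend standing in for an open-source hardware driver. */
    static void sw_bind_shader(struct pipe_driver *drv, const char *ir)
    {
        printf("[%s] compile IR: %s\n", drv->name, ir);
    }

    static void sw_draw(struct pipe_driver *drv, unsigned n)
    {
        printf("[%s] draw %u vertices\n", drv->name, n);
    }

    int main(void)
    {
        struct pipe_driver oss_hw = { "oss-gallium-driver", sw_bind_shader, sw_draw };
        struct gl_state_tracker gl = { &oss_hw };  /* the pairing proposed above */
        st_draw_arrays(&gl, 3);
        return 0;
    }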

  • #2
    Fglrx already uses a framework similar to G3D internally. They wouldn't gain much from switching over. They could re-use G3D's state trackers... but they already have their own, so why replace them?

    Switching to G3D might make sense if AMD could use G3D for both their Linux and their Windows drivers. But I doubt that's going to happen soon; G3D isn't mature enough yet. And even if it were, I'm sceptical that the initial costs would ever pay off.

    Besides, G3D doesn't magically solve the problem of kernel and xorg ABIs.


    There are quite a few people at AMD who know way more about their drivers, their customers and their markets than we do. If it would solve their problems, they'd already be doing it.



    • #3
      Fglrx already uses a framework similar to G3D internally. They wouldn't gain much from switching over. They could re-use G3D's state trackers... but they already have their own, so why replace them?
      You have what I mean backwards: port the OpenGL part to G3D and drop the hardware-specific part. So the new graphics stack would pair the fglrx OpenGL state tracker with an OSS G3D driver.

      Fglrx already uses a framework similar to G3D internally
      If they already use something similar to G3D to separate out the cross-platform code, then I could see that being an issue, since it would interfere with the Windows and OS X drivers. Having G3D drivers across all platforms would be really cool, but I doubt AMD would have much of a reason to do it in the foreseeable future.

      But I doubt that's going to happen soon, G3D isn't mature enough yet
      I'm thinking at least a year or two down the line, considering the current G3D drivers are still quite early in development anyway.

      Besides, G3D doesn't magically solve the problem of kernel and xorg ABIs
      If the fglrx devs were only working on the state tracker they would only have to deal with the G3D ABI.

      There's quite a few people at AMD that know way more about their drivers, their customers and their markets than we do. If it would solve their problems, they'd already be doing it
      I didn't mean to come across as arrogant or anything; you're entirely correct in what you say. I'm just curious, from a hypothetical perspective, whether the above approach would be feasible.



      • #4
        Originally posted by cobalt:
        You have what I mean backwards: port the OpenGL part to G3D and drop the hardware-specific part. So the new graphics stack would pair the fglrx OpenGL state tracker with an OSS G3D driver.
        G3D already has a functional OpenGL 2.x state tracker, and will hopefully soon have OpenGL 3.x and OpenCL 1.0 state trackers. What is missing, assuming you are using an r600|r700|evergreen variant, is the G3D hardware driver. I am under the impression that this work has already started, but is living in a separate branch that is not ready for end users. If, however, you are using an r300|r400|r500 variant, the hardware driver sits in the mainline code and is maturing rapidly.

        AMD has stated multiple times on these forums that they have no intention of releasing the source code for FGLRX due to legal issues related to DRM. They have also stated that they have no intention of releasing a binary hardware driver for G3D, preferring a FOSS solution instead.

        If you are using a r600|r700|evergreen based card, your best bet is to use either FGLRX, or the classic Mesa driver for your 3D needs. FGLRX now supports OpenGL 3.3/4.0 if you want the most out of your card. The classic Mesa driver only supports OpenGL 2.0, but is an excellent option if you require a FOSS solution.

        If you are using a r300|r400|r500 based card, your best bet is to use either the classic Mesa driver or the G3D driver. Both of these drivers are fairly mature, and should provide the majority of the features that you will need. If you are running a supported kernel and xorg, you can try running the last version of FGLRX that supported these chips, 9.3.



        • #5
          Originally posted by cobalt:
          With OSS ATI drivers moving to the Gallium3D architecture in the medium to long term, could the fglrx OpenGL/OpenCL stack be ported over to become a Gallium3D state tracker and released as a blob?
          I don't see how this could ever work. The Gallium3D driver interface is a pretty fast-moving beast, with API changes every week (one was just 15 hours ago), which means every proprietary piece relying on this interface would need to be fixed too so that it keeps working with the latest version (see the contrived illustration below). A non-blob might work, though.

          A much better plan would be to replace Gallium3D with the fglrx driver stack entirely, but that's just as crazy as the former idea.
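
          To illustrate why a precompiled blob and a fast-moving internal interface don't mix, here is a contrived example (not actual Gallium code): a binary built against revision A of a pipe-style struct bakes that revision's field offsets into its machine code, so it silently misreads the struct once revision B inserts a field. In-tree drivers just get recompiled; a blob would have to be reshipped for every such change.

          /* Contrived illustration, not real Gallium structures. */
          #include <stddef.h>
          #include <stdio.h>

          struct draw_info_rev_a {      /* what the blob was compiled against */
              unsigned start;
              unsigned count;
          };

          struct draw_info_rev_b {      /* what this week's tree provides */
              unsigned start;
              unsigned instance_count;  /* new field inserted in the middle */
              unsigned count;
          };

          int main(void)
          {
              /* The offset of 'count' moved, so every access the old binary
               * makes through its stale layout now reads the wrong memory. */
              printf("offsetof(count): rev A = %zu, rev B = %zu\n",
                     offsetof(struct draw_info_rev_a, count),
                     offsetof(struct draw_info_rev_b, count));
              return 0;
          }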



          • #6
            Yeah, I don't see how this would work either. We already have a hardware layer abstraction that supports OpenGL 4.0, OpenCL etc., and have years of performance tuning invested in it. As marek said, the kernel and xorg APIs are probably always going to be a lot more stable than the Gallium3D API, for the simple reason that new userspace APIs are going to come along more often than the kernel and/or X see major design changes.

            I'm not sure how having a second state tracker (fglrx + mesa) would help either.

            In many ways running the fglrx 3D userspace driver over the open source kernel driver would be less work *and* more useful. Even that would be a *lot* of work, however, since the memory management abstractions are quite different.

            I guess the key point here is that the *current* Gallium3D implementation (and API) is probably going to need significant changes before it can support GL 4.0 and OpenCL. That doesn't mean it's not still a Good Thing, just that it's not likely to be the free (or at least cheap) lunch you might hope for.
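
            To make the memory-management point concrete, here is a minimal sketch (an illustration added here, not bridgman's code) of how the open-source stack allocates a buffer: it asks the kernel's GEM interface for an opaque handle and leaves placement, eviction and fencing to the kernel memory manager, whereas fglrx userspace expects its own allocator. The radeon GEM create ioctl is real, but the header paths and the /dev/dri/card0 node are assumptions about the local setup.

            /* Minimal sketch: allocate a kernel-managed buffer object through the
             * radeon GEM interface the open-source stack is built on.  Header
             * locations and the device node depend on the local libdrm/kernel. */
            #include <fcntl.h>
            #include <stdio.h>
            #include <string.h>
            #include <sys/ioctl.h>
            #include <unistd.h>
            #include <drm/radeon_drm.h>

            int main(void)
            {
                int fd = open("/dev/dri/card0", O_RDWR);
                if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

                struct drm_radeon_gem_create req;
                memset(&req, 0, sizeof(req));
                req.size = 1024 * 1024;                     /* 1 MiB buffer object */
                req.alignment = 4096;
                req.initial_domain = RADEON_GEM_DOMAIN_VRAM;

                /* The kernel hands back an opaque GEM handle; placement, eviction
                 * and fencing are the kernel memory manager's job, not ours. */
                if (ioctl(fd, DRM_IOCTL_RADEON_GEM_CREATE, &req) == 0)
                    printf("got GEM handle %u for a %llu-byte buffer\n",
                           req.handle, (unsigned long long)req.size);
                else
                    perror("DRM_IOCTL_RADEON_GEM_CREATE");

                close(fd);
                return 0;
            }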



            • #7
              I guess the key point here is that the developers are saying "the cool new stuff users want to see is going to come after we move to Gallium3D", but sometimes that is being interpreted as "Gallium3D gives us all this stuff for free", rather than "the developers generally think the Gallium3D architecture is the right place to invest their time, so they're going to try to move to Gallium3D early rather than adding a bunch of features to the classic HW driver and *then* moving to Gallium3D and abandoning the code they just enhanced".

              Sorry about the long sentence

              We are doing new HW support on the classic HW driver model but that doesn't require a big investment in *improving* the driver code, just adding new paths to the existing code.



              • #8
                Originally posted by bridgman:
                the *current* Gallium3D implementation (and API) is probably going to need significant changes before it can support GL 4.0 and OpenCL.
                That doesn't sound so good... but yes... do it faster than a bullet can fly.



                • #9
                  I don't want to be rude or anything, but I think porting the fglrx 3D part to G3D would be a pain for both parties. Let's face it, the fglrx 3D stack is very far from optimal to begin with, and let's not even talk about stability and compatibility (I know it's shared code and some people work hard to make it better, but still). What possible benefit could you get from that stack? And how much bug-fixing time do you think AMD would need to polish that mess, when even today their own stack is that buggy? (I know the Wine issues are partly both parties' fault, that using a very old kernel/xorg helps too, and that by passing a number of parameters at kernel boot you can even get something that works semi-decently, but the point is I'm not supposed to have to do all that with a commercial product, especially since I paid $600 for my graphics cards.)

                  On the other hand, the OSS 3D stack is getting rock solid and stable, and every day you discover something new that works nicely. Really, I haven't had any issues with anything using Mesa for a long time now.

                  I'm using the latest git code around, the latest kernel code available, even the unstable KMS/DRM branches on all my AMD graphics PCs, and amazingly I never hit more than a small fraction of the issues I've been having with the latest fglrx; that tells you something. I know some stuff is missing and some native games don't work just yet (well, HoN and Nexuiz are actually quite good on Mesa 7.9, but you get the idea). Actually, for average-Joe desktop use the OSS driver is light years ahead of fglrx already (I mean fast 2D rendering, web browsing, basic 3D stuff, some games like Nexuiz or HoN (playable, not FPS king), and an excellent compiz/kwin composite experience).

                  Now, I agree the OSS stack is in heavy development and G3D is a work in progress, so we don't have a fully featured stable driver to compare with. But the solution is not to port the fglrx mess to OSS; the solution is to give the new OSS stack time to reach feature parity (not that far away: Mesa 7.9 is getting really close to supporting GL 2.1, which is basically what you need for everything for now), and Mesa 8.0 will probably bring GL3, stack optimization, or both.

                  About Wine: I expect Wine will work better with Mesa or the NVIDIA blob long before it works well with fglrx (actually, Wine 2D apps like the Adobe CS4 suite work like a charm with the OSS driver, unlike with fglrx). I don't really blame the Wine project, because Mesa was in very bad shape until the new stack arrived in recent versions and fglrx has only become barely usable fairly recently, so the only available "workable" solution was NVIDIA. My employees threatened to resign if we didn't drop support for AMD on Linux, and we were only trying to add some basic 3D telemetry representations of geological/petroleum data (which is supposed to be a "simple" task, at least compared to really complex 3D work); I don't want to imagine what it's like to write something as massive as an entire DX9/10 emulation layer on top of OpenGL.

                  Maybe by Q2 or Q3 of 2011 we should have a feature-complete graphics stack: not the fastest, but good enough until optimization comes into play, i.e. GL 2.1/3.3, an OpenCL tracker, and better or optimal power management in the OSS stack.

                  About Blu-ray and the surrounding DRM stuff: that is surprisingly easy to work around, from the HDMI side to the codec encryption, so it's not really an issue on any OS, and things like DXT3 and so on can be handled without much trouble too. With OpenCL 1.0 working, I think devs wouldn't even need to know how the GPU does it in C; they could just create an equivalent directly in OpenCL and ship it as a plugin in a legal limbo.

                  About UVD: with an OpenCL tracker available, I don't see the need for it anymore. Just accelerating the codecs in ffmpeg/Theora/etc. with OpenCL, plus XV for output, should be enough for anything you need (see the sketch below).
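
                  For illustration, here is the kind of device kernel such a setup could run: a toy BT.601 YUV-to-RGBA conversion written in OpenCL C. It is a sketch under simplifying assumptions (all three planes at full resolution, no chroma subsampling); on the host side a player would build it with clCreateProgramWithSource and launch it with clEnqueueNDRangeKernel over a 2D range of width x height.

                  /* Toy OpenCL C kernel: BT.601 YUV -> RGBA, one pixel per work item.
                   * Simplification: all planes are full resolution, so this is a
                   * sketch, not a drop-in video path. */
                  __kernel void yuv_to_rgba(__global const uchar *y_plane,
                                            __global const uchar *u_plane,
                                            __global const uchar *v_plane,
                                            __global uchar4 *rgba_out,
                                            const int width)
                  {
                      const int i = get_global_id(1) * width + get_global_id(0);

                      const float y = (float)y_plane[i] - 16.0f;
                      const float u = (float)u_plane[i] - 128.0f;
                      const float v = (float)v_plane[i] - 128.0f;

                      const float4 rgba = (float4)(1.164f * y + 1.596f * v,
                                                   1.164f * y - 0.392f * u - 0.813f * v,
                                                   1.164f * y + 2.017f * u,
                                                   255.0f);

                      /* convert_uchar4_sat clamps each channel to [0, 255]. */
                      rgba_out[i] = convert_uchar4_sat(rgba);
                  }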

                  So by that time AMD can drop, or worry less about, Radeon cards in fglrx and focus it entirely on FireGL cards, which I think is the best solution for both sides: a superb OSS driver for mainstream users and a tightly closed driver for the DRM crowd with the big-money cards.



                  • #10
                    Hmm, I hate the post length limit and the edit restrictions, argh.

                    By the way, guys, outstanding job on the OSS driver. Now even the weak graphics chip in my laptop can play Flash video fullscreen under kwin compositing perfectly smoothly using yesterday's Mesa git. +1, I love you all.



                    • #11
                      Originally posted by jrch2k8:
                      Let's face it, the fglrx 3D stack is very far from optimal to begin with
                      Let's face it, the fglrx stack is a project 10 times larger than the OSS graphics driver stack, if not more (and probably larger than the kernel). It's better in pretty much every aspect (not counting the little regressions which make users so angry), and it has tons and tons of great features that Gallium will slowly be picking up over the following years. I mean, if they made it open source, there would no longer be a need to develop drivers in Mesa anymore; those little fglrx regressions and other deficiencies would get fixed by the community, and open-source graphics would jump from GL 2.1 and no OpenCL to GL 4.0 and working OpenCL, with an advanced shader compiler and optimizer, a memory manager, and whatnot. And Mesa/Gallium would slowly die because no one would care about it anymore... that's how I see it.



                      • #12
                        Originally posted by marek:
                        the fglrx stack is a project 10 times larger than the OSS graphics driver stack, if not more
                        Fair enough.

                        Originally posted by marek:
                        and probably larger than the kernel
                        No way in hell.

                        Originally posted by marek:
                        It's better in pretty much every aspect
                        Well, if you mean feature-wise, i.e. at least function X or Y is there and responds with something, right or wrong, then yes, of course; the OSS stack is pretty recent, so that much is obvious. But as for actually doing the job, and doing it well or even well enough? No: not before ATI was bought by AMD, not two years ago, not six months ago, not today with the latest driver.

                        Originally posted by marek:
                        (not counting the little regressions which make users so angry)
                        Little regressions? For real? You call what fglrx has "little regressions"? Do you own an AMD card? Have you tried CrossFire?

                        Now seriously, I could call it a glitch if they got some particular function wrong, or if NVIDIA did some trickery and got everyone using an X extension in a non-standard way so that AMD had to adapt to it or fight it, BUT:

                        * 2D slowness is not a glitch. I mean, really, I have two 4850 X2s (3200 cores) and I can watch the pixels get rendered one by one; that's not a little regression.
                        * 3D: well, fglrx is great at hitting 20000 in glxgears; for everything else you can expect all sorts of issues, ranging from segfaults to kernel panics (these are quite funny, by the way), running out of disk because of the massive syslog warnings from the kernel, Wine being unusable even for some 2D apps, many native games failing to run (granted, this could be partly the engine's fault), shaders getting messed up depending on the driver version (choosing an fglrx driver version is as complex as selecting a good wine), and compositing, which is a beast of issues on its own, going from massive slowness to massive memory leaks depending on the driver version. That is not a simple regression.

                        * CrossFire: well, if it works in a given driver version (some versions do, some versions trigger the mother of all kernel panics), it's normally massively slower than on Windows and makes games more problematic than usual (in the Ubuntu beta driver it improved a bit; at least the kernel panics don't force me to pull the power plug on my PC, and they happen less often). Either way, CrossFire is something you want disabled unless you've tested it thoroughly.

                        * OpenCL: well, it's still beta, I think, but in my test cases the library is there yet has too many issues. We tried an OpenCL book example and it ran just fine everywhere (Windows/Mac/NVIDIA/AMD) but never on AMD Linux (the SDK provides a different driver, but you need to downgrade half the distro to use it; I didn't try it, and in the end I just stopped caring). This is not a simple regression either, but I'll grant you that having the source code of that library could speed up the OSS stack's own OpenCL development.

                        * UVD: well, it's basically garbage without a nasty combination of library versions, and even if you get it to render at all, it's not even close to VDPAU or OSS XV quality. Besides, in my case it hung my computer after about 35 minutes of playing HD content (for me it's not worth the fight; it's easier to just put a GTX 240 in my HTPC, which takes about three minutes). Another thing that is not a simple regression.

                        * GL4: again, it could work as a reference, but it's still very immature and buggy. At least the Unigine Heaven demo was very slow; my 8800 GTS 320 almost beat my quadfire setup with 3200 cores (I guess it still needs work because the API is so new). I must say in my case it's more like GL 3.3, because my cards are the 4xxx series; I don't doubt Evergreen may be a beast at GL4, i.e. maybe they optimized it only to show off on Evergreen, who knows.

                        * You mention an advanced memory manager; I would say advanced memory leaker. Well, at least it has gotten better than the 9.x series of fglrx, and on a very powerful system it's less noticeable (I still don't like my graphics driver stealing 1 GB of my RAM, but whatever, I have 7 GB more to work with). I mean, if I log in without anything 3D running, memory usage stays around 500 MB for as long as I work; if I activate, say, compiz or kwin, within 30 minutes my RAM usage gets near 2.2 GB, and so on until things finally slow down and I have to restart xorg. Obviously I tried different distros with the OSS driver, and it doesn't happen there.

                        I'm not saying fglrx is completely worthless; I believe that by hiring a good QA manager they could improve a lot over the next few years. My point is that these bugs aren't a few simple regressions; this many bugs are a very heavy problem, and even if the code were released by AMD (which will never happen, by the way; there are several threads about it), the community would have to wait a long time to get something worth the effort. At this point I don't see the need to reach FPS parity with fglrx at the expense of all the time spent fixing those bugs, when the open stack can be much simpler to maintain and not that far behind fglrx (once the optimization work gets there, obviously). And OpenCL can't be that hard (it's supposed to be royalty-free from scratch, so maybe getting the docs released should be easier for AMD).



                        • #13
                          Some facts from me, to support jrch2k8's points:
                          On an E5300 with 2x2 GB DDR2-800, an NVIDIA 9800GT Green did 95000 points in non-composite mode and 80000 points in composite mode.
                          On the same hardware an HD4650 (I acknowledge it's a far weaker card) did 6000 points (but it shouldn't be weaker to this extent!).

                          If AMD upgrades fglrx, improves Wine support, improves 2D support, and adds video acceleration, they will only land where NVIDIA ALREADY is.

                          Gallium3D with FOSS drivers, on the other hand, is what makes me want to sell the 9800GT and go with AMD hardware now, regardless of its lower 3D performance and the absence of H.264 acceleration.



                          • #14
                            Anyway, back on topic: again, I recognize that AMD is putting some serious work into getting fglrx into better shape, but it's not a viable option for Linux or for AMD. As bridgman has repeatedly said, fglrx won't be open-sourced, and frankly the big names in the OSS community don't want it either. Remember what was said: fglrx is a white mammoth with millions of lines of code shared with other OSes (I don't want to imagine that code, by the way; it creeps me out just thinking about it), so maintainability would be horrible and bug fixing would be horrid too.

                            The idea behind this whole new stack is, first, maintainability; then code sharing among the several drivers out there (remember AMD is just one piece of the puzzle) where possible; a standard approach to drivers; the best experience out of the box; and finally the ability to implement new features, OSS or commercial, as quickly and efficiently as possible. So far they are doing great, but you have to remember this project is very new; even if it has already achieved many wins in some areas, it is still very much alpha software. So be patient until at least the features get there; then you will see more FPS, and as bridgman says, they believe it's possible to reach about 70% of fglrx performance with this new stack and Mesa. Really cool, maybe more, who knows.



                            • #15
                              Linux kernel: 12 million LoC
                              fglrx: 15 million LoC (source)

                              That thing is huge.

                              But the comparison isn't totally fair. The Linux kernel includes drivers for a whole lot of hardware (not just GPUs), while fglrx is more than just a kernel module: most of the driver is userspace code.
