
Port fglrx openGL stack to Gallium3D


  • #11
    Originally posted by marek View Post
    Let's face it, fglrx stack is 10 times larger project than OSS graphics driver stack if not more (and probably larger than kernel), is better in pretty much every aspect (not counting the little regressions which make users so angry), and has tons and tons of great features Gallium will slowly be picking up throughout the following years. I mean if they made it open source, there would no longer be a need to develop drivers in Mesa anymore, those little fglrx regressions and other deficiencies would get fixed by the community, and open source graphics would jump from GL2.1 and no OpenCL to GL4.0 and working OpenCL with an advanced shader compiler and optimizer, memory manager, and whatnot. And Mesa/Gallium would slowly die because no one would care about it anymore.... that's how I see it.
    Originally posted by marek View Post
    fglrx stack is 10 times larger project than OSS graphics driver stack if not more
    fair enough

    Originally posted by marek View Post
    and probably larger than kernel
    no way in hell

    Originally posted by marek View Post
    is better in pretty much every aspect
Well, if you mean feature-wise, as in "at least function X or Y is there and responds with something, right or wrong", then sure; the OSS stack is very recent, that part is obvious. But if you mean it does the job, and does it well or even well enough: no. Not before ATI was bought by AMD, not two years ago, not six months ago, and not today with the latest driver.

    Originally posted by marek View Post
(not counting the little regressions which make users so angry)
Little regressions? For real? You call the fglrx problems little regressions? Do you even own an AMD card? Have you tried CrossFire?

Now seriously: I could call it a glitch when they get some function wrong, or when Nvidia does some trickery and gets everyone using an X extension in a non-standard way so AMD has to adapt to it or fight it. BUT:

* 2D slowness is not a glitch. I mean, really, I have two 4850x2s (3200 cores) and I can watch the pixels get rendered one by one, lol. That is not a little regression.
* 3D: fglrx is cool for hitting 20000 in glxgears; for everything else you can expect any sort of issue, from segfaults to kernel panics (these are quite funny, by the way) to running out of disk because of the massive syslog warnings from the kernel. Wine is unusable even for some 2D apps, many native games fail to run (granted, this could be partly the engine's fault), shaders get messed up depending on the driver version (choosing an fglrx driver version is as complex as choosing a good Wine version, lol), and compositing is a beast of issues all on its own, again ranging from massive slowness to massive memory leaks depending on the driver version. That is not a simple regression.

* CrossFire: if it works in driver version X (some versions do, some unleash the mother of all kernel panics), it is normally massively slower than on Windows and makes games more problematic than usual (the Ubuntu beta driver improved things a bit; at least the kernel panics no longer force me to pull my PC's power plug, and they happen less often). Either way, CrossFire is something you want disabled unless you have tested it thoroughly.

* OpenCL: still beta, I think, but in my test cases the library is there and it has too many issues. We were trying an OpenCL book example and it ran just fine everywhere (Windows, Mac, Nvidia, AMD) but never on AMD on Linux (the SDK provides a different driver, but you need to downgrade half the distro to use it; I didn't try it, and in the end I just stopped caring). Again, not a simple regression, but I grant you that having the source code of that library could speed up the OSS stack's own OpenCL development.

* UVD: basically garbage without a nasty combination of library versions, and even if you get it to render at all, it is not even close to VDPAU or the OSS Xv in quality. Besides, in my case it hung my computer after about 35 minutes of playing HD content (for me it isn't worth the fight; it's easier to just put a gtx240 in my HTPC, which is like 3m, lol). Another thing that is not a simple regression.

* GL4: again, it could work as a reference, but it is still very immature and buggy. At least the Unigine Heaven demo was very slow; my 8800 GTS 320 almost beat my quadfire with 3200 cores (I guess it still needs work, since the API is so new). Well, I must say it is more like GL 3.3 in my case, since my cards are 4xxx series. I don't doubt Evergreen may be a beast at GL4, i.e. they optimized it only to show off on Evergreen. Who knows.

Well, you name an advanced memory manager; I would say an advanced memory leaker. At least it has gotten better since the 9.x series of fglrx, and on a very powerful system it is less noticeable (I still don't like my graphics stealing 1 GB of my RAM, but what the heck, I have 7 GB more to work with). I mean, if I log in without anything 3D, memory stays around 500 MB for as long as I work; if I activate, say, Compiz or KWin, within 30 minutes my RAM is getting near 2.2 GB and keeps climbing until everything slows down and I have to restart Xorg. (Obviously I tried different distros, and with the OSS driver it doesn't happen.)

I don't say fglrx is completely worthless; I believe that by hiring a good QA manager they could improve a lot in the next few years. My point is that these bugs aren't a few simple regressions; they are very heavy problems. Even in the case of fglrx being released by AMD (which will never happen, by the way; there have been several threads about it), the community would have to wait a huge time to get something worth the effort. At this point I don't see the need to reach fps parity with fglrx at the expense of all the time spent fixing those bugs, when the open stack can be much simpler to maintain and not that far behind fglrx (once the optimization work gets there, obviously). And, well, OpenCL can't be that hard; it is supposed to be royalty-free from scratch, so releasing the docs should be easier for AMD.



    • #12
Some facts from me, to support jrch2k8's points:
On an E5300 with 2x2 GB DDR2-800, an Nvidia 9800GT Green did 95000 points in non-composite mode and 80000 points in composite mode.
On the same hardware, an HD4650 (I acknowledge it's tons weaker) did 6000 points (but it shouldn't be weaker to this extent!).

If AMD upgrades fglrx, improves WINE support, improves 2D support, and adds video acceleration, they will land where Nvidia ALREADY is now.

For now, Gallium3D with FOSS drivers is what makes me sell the 9800gt and go with AMD hardware, regardless of its low 3D performance and the absence of h264 acceleration.



      • #13
Anyway, back on topic: again, I recognize that AMD is putting some serious work into getting fglrx into better shape, but it is not a viable option for Linux or for AMD. As bridgman has repeatedly said, fglrx won't be open-sourced, and frankly the big names in the OSS community don't want it either. Remember what was said: fglrx is a white mammoth with millions of lines of code shared with other OSes (I don't want to imagine that code, by the way; it creeps me out just to think about it). Maintainability would be horrible, and bug fixing would be horrid too.

The idea behind this whole new stack is, first, maintainability; then code sharing among the various drivers out there where possible (remember, AMD is just one piece of the puzzle); a standard approach for drivers; the best experience out of the box; and finally the ability to implement new features, open source or commercial, as fast and efficiently as possible. So far they are doing great, but you have to remember this project is very new. Even if it has already accomplished many wins in some departments, it is still very alpha software. So be patient at least until the features get there; then you will see more fps. As bridgman says, they believe it is possible to achieve 70% of fglrx performance with this new stack and Mesa. Really cool, maybe more, who knows.



        • #14
linux kernel: 12 million LoC
fglrx: 15 million LoC (source)

That thing is huge.

But the comparison isn't totally fair. While the Linux kernel includes drivers for a whole lot of hardware (not just GPUs), fglrx is more than a kernel module: most of the driver is userspace code.



          • #15
            Originally posted by rohcQaH View Post
linux kernel: 12 million LoC
fglrx: 15 million LoC (source)

That thing is huge.

But the comparison isn't totally fair. While the Linux kernel includes drivers for a whole lot of hardware (not just GPUs), fglrx is more than a kernel module: most of the driver is userspace code.
Well, another thing is that fglrx is developed with shared code, so many of those lines are not necessarily Linux-specific, I think. They probably have some sort of compile-time filter to build for each OS; I'm not sure, though. But if that is the case, then the Linux kernel is probably much bigger than the Linux-specific part of fglrx. Maybe.



            • #16
              Originally posted by crazycheese View Post
Some facts from me, to support jrch2k8's points:
On an E5300 with 2x2 GB DDR2-800, an Nvidia 9800GT Green did 95000 points in non-composite mode and 80000 points in composite mode.
On the same hardware, an HD4650 (I acknowledge it's tons weaker) did 6000 points (but it shouldn't be weaker to this extent!).
              Repeat after me: glxgears is not a benchmark. Don't try to use it as one, because its results are FUCKING INVALID.

              There, better now?

              In fact, fglrx performs identically to the Windows driver in OpenGL (sometimes slightly faster, too). The rest of your points are being addressed as we speak (better 2d acceleration, video acceleration).

              Bah.



              • #17
                Originally posted by BlackStar View Post
                Repeat after me: glxgears is not a benchmark. Don't try to use it as one, because its results are FUCKING INVALID.

                There, better now?

                In fact, fglrx performs identically to the Windows driver in OpenGL (sometimes slightly faster, too). The rest of your points are being addressed as we speak (better 2d acceleration, video acceleration).

                Bah.
Yes, I read this on an unofficial ATI site when I had the card in my hands.

fglrx: OpenArena 40-60 fps (full HD, maxed out).

nvidia: with the 9800gt, 300 fps (full HD, maxed out).
(For comparison, a 6800gt with GDDR3 (Asus V9999) on an Athlon XP 3200+ gives 120 fps, full HD, maxed out.)

Technically the 4650 is weaker than the 9800; additionally, the 4650 has 128-bit DDR2 versus 256-bit GDDR3 on the 9800. So I think the GPU starves for lack of RAM bandwidth.

But then Nvidia makes crap and keeps its drivers closed, whilst ATI does A LOT for open-source drivers, so I bought an HD4770 (faster chip than the 9800gt; 128-bit GDDR5 on the 4770 equals 256-bit GDDR3 on the 9800; lower idle power consumption). I have it in my hardware and am recompiling the latest kernel to use it (typing this in VESA). Not much trouble; Gentoo anyway.

The question is whether all the options you mention (video acceleration; 2D, which I know is already here; improved OpenGL) will actually be handled in fglrx. Then it would be as with Nvidia, but later and worse. I switched to the AMD 4770 only because of their open-source effort; yes, I appreciate it that much.



                • #18
                  Originally posted by BlackStar View Post
                  Repeat after me: glxgears is not a benchmark. Don't try to use it as one, because its results are FUCKING INVALID.

                  There, better now?

                  In fact, fglrx performs identically to the Windows driver in OpenGL (sometimes slightly faster, too). The rest of your points are being addressed as we speak (better 2d acceleration, video acceleration).

                  Bah.
Mmm, I could be wrong, but as far as I have tested, the relation between fglrx and Catalyst is more like this:

DX9/10/10.1/11 >>>>>> WGL4, which in turn >>>>>>>>>>>>>>> fglrx

Download the Unigine demos (I think those are complex enough to show the difference) for both OSes and run them with both codepaths; at least on my hardware the difference is very noticeable.

Windows 7 x64
Kubuntu 10.04 RC
4850x2 x2, aka quadfire
quad-core CPU

Now, on the Nvidia side, OpenGL on Linux is about 10% slower than DirectX on Windows, which is completely acceptable to me.



                  • #19
                    Originally posted by crazycheese View Post
yes I appreciate it that much.
Amen, brother. The only reason I didn't instantly RMA my quadfire is that I heard about AMD's FOSS work in time, so yes, I appreciate it that much too.



                    • #20
                      Originally posted by jrch2k8 View Post
Amen, brother. The only reason I didn't instantly RMA my quadfire is that I heard about AMD's FOSS work in time, so yes, I appreciate it that much too.
Intel E5300 --> AMD Athlon II X4 630 (instead of an Intel i3-540)
s775 ASRock P43ME --> AM3 Gigabyte GA-MA785GMT-UD2H (instead of an Intel s1156 board)
Nvidia 9800gt --> PowerColor HD4770

All because AMD supports FOSS and Nvidia regresses totally.

Intel does not provide performance 3D hardware, and overall it is pricey for the same performance, so sorry, Intel.

And I would really appreciate it if AMD created a Linux counter where you can register your hardware as running Linux, as a main desktop and home OS, not just a workstation one.

