
Thread: Port fglrx OpenGL stack to Gallium3D

  1. #11
    Join Date
    Jan 2009
    Posts
    630

    Default

    Quote Originally Posted by jrch2k8 View Post
    Let's face it, the fglrx 3D stack is very far from being optimal to begin with.
    Let's face it: the fglrx stack is a project ten times larger than the OSS graphics driver stack, if not more (and probably larger than the kernel). It is better in pretty much every aspect (not counting the little regressions which make users so angry), and has tons and tons of great features that Gallium will slowly be picking up over the following years. I mean, if they made it open source, there would no longer be any need to develop drivers in Mesa; those little fglrx regressions and other deficiencies would get fixed by the community, and open source graphics would jump from GL 2.1 and no OpenCL to GL 4.0 and working OpenCL, with an advanced shader compiler and optimizer, a memory manager, and whatnot. And Mesa/Gallium would slowly die because no one would care about it anymore... that's how I see it.

  2. #12
    Join Date
    Jun 2009
    Posts
    1,191

    Default

    Quote Originally Posted by marek View Post
    the fglrx stack is a project ten times larger than the OSS graphics driver stack, if not more
    Fair enough.

    Quote Originally Posted by marek View Post
    and probably larger than the kernel
    No way in hell.

    Quote Originally Posted by marek View Post
    is better in pretty much every aspect
    Well, if you mean feature-wise, i.e. at least function X or Y is there and responds with something, right or wrong, then yes, of course; the OSS driver is pretty recent, that much is obvious. But if you mean it does the job and does it perfectly, or even well enough, then no: not before ATI was bought by AMD, not two years ago, not six months ago, and not today with the latest driver.

    Quote Originally Posted by marek View Post
    (not counting the little regressions which make users so angry)
    Little regressions? For real? You call fglrx's problems little regressions? Do you own an AMD card? Have you tried CrossFire?

    Now seriously, I could call it a glitch if they got some function wrong, or if NVIDIA did some trickery and got everyone using an X extension in a non-standard way so that AMD has to adapt to it or fight it. BUT:

    * 2D slowness is not a glitch. Really, I have two 4850 X2s (3200 cores) and I can watch the pixels get rendered one by one; that is not a little regression.
    * 3D: fglrx is great at hitting 20,000 FPS in glxgears; for everything else you can expect all sorts of issues, from segfaults to kernel panics (these are quite funny, by the way) to running out of disk space because of the massive syslog warnings from the kernel. Wine is unusable even in some 2D apps, many native games fail to run (granted, that could partly be the engine's fault), shaders get messed up depending on the driver version (picking an fglrx driver version is as complex as selecting a good wine), and compositing is a beast of issues all on its own, again ranging from massive slowness to massive memory leaks depending on the driver version. That is not a simple regression.

    * CrossFire: if it works in driver version X (some versions do, some unleash the mother of all kernel panics), it is normally massively slower than on Windows and makes games more problematic than usual (the Ubuntu beta driver improved things a bit; at least the kernel panics no longer force me to pull the power plug on my PC, and they happen less often). Either way, CrossFire is something you want disabled unless you have tested it thoroughly.

    * OpenCL: it is still beta, I think, but in my test cases the library is there and it has too many issues. We tried an OpenCL book example and it ran just fine everywhere (Windows/Mac/NVIDIA/AMD) but never on AMD on Linux (the SDK provides a different driver, but you need to downgrade half the distro to use it; I didn't try that, in the end I just stopped caring). This is not a simple regression either, but I grant you that having the source code of that library could speed up the OSS stack's own OpenCL development.

    * UVD: it is basically garbage without a nasty combination of library versions, and even when you get it to render at all, it is nowhere near VDPAU or the OSS Xv in quality. Besides, in my case it hung my computer after about 35 minutes of playing HD content (for me it isn't worth the fight; it's easier to just put a gtx240 in my HTPC, which takes like three minutes). Another thing that is not a simple regression.

    * GL4: again, it could work as a reference, but it is still very immature and buggy; at least the Unigine Heaven demo was very slow, to the point that my 8800 GTS 320 almost beat my quad-CrossFire setup with 3200 cores (I guess it still needs work, since the API is so new). I must say it is more like GL 3.3 in my case, since my cards are 4xxx series; I don't doubt Evergreen may be a beast at GL4, i.e. maybe they optimized it only to show off on Evergreen, who knows.

    Well, you call it an advanced memory manager; I would call it an advanced memory leaker. Granted, it has gotten better since the fglrx 9.x series, and on a very powerful system it is less noticeable (I still don't like my graphics driver stealing 1 GB of my RAM, but what the hell, I have 7 GB more to work with). I mean, if I log in without anything 3D, memory usage stays around 500 MB for as long as I work; if I activate, say, Compiz or KWin, within 30 minutes my RAM usage is nearing 2.2 GB, and so on until things finally slow down and I have to restart Xorg. Obviously I tried different distros and the OSS driver, and it doesn't happen there.
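    A quick way to check this kind of leak yourself is to sample a process's resident memory from /proc. A minimal sketch, assuming Linux; "Xorg" is the usual X server process name, and the fallback to the current shell is only there so the snippet runs even without an X session:

```shell
# Sketch: read a process's resident memory (VmRSS) from /proc.
# Assumes Linux. Falls back to this shell's own pid for demonstration.
pid=$(pgrep -o Xorg || echo $$)
rss_kb=$(awk '/^VmRSS:/ {print $2}' "/proc/$pid/status")
echo "pid $pid VmRSS: ${rss_kb} kB"
```

    Run it every minute or so; a VmRSS that climbs steadily while a compositor is active matches the leak pattern described above.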

    I don't say fglrx is completely worthless; I believe that by hiring a good QA manager they could improve a lot over the next few years. My point is that these bugs aren't a few simple regressions; they are very heavy problems, and even if fglrx were released by AMD (which will never happen, by the way; there are several threads about that), the community would have to wait a very long time to get something worth the effort. At this point I don't see the need to chase FPS parity with fglrx at the cost of all the time spent fixing those bugs, when the open stack can be much simpler to maintain and not that far behind fglrx (once the optimization work gets there, obviously). And OpenCL can't be that hard; it is supposed to be royalty-free from scratch, so the docs should be easier for AMD to release.

  3. #13
    Join Date
    Apr 2010
    Posts
    1,946

    Default

    Some facts from me, to support jrch2k8's points:
    On an E5300 with 2x2 GB DDR2-800, an NVIDIA 9800 GT Green scored 95,000 points in non-composited mode and 80,000 points in composited mode.
    On the same hardware, an HD 4650 (which I acknowledge is far weaker, but not to this extent!) scored 6,000 points.

    If AMD upgrades fglrx, improves Wine support, improves 2D support, and adds video acceleration, they will land where NVIDIA ALREADY is now.

    Right now, Gallium3D with the FOSS drivers is what makes me want to sell the 9800 GT and go with AMD hardware, regardless of its lower 3D performance and the absence of H.264 acceleration.

  4. #14
    Join Date
    Jun 2009
    Posts
    1,191

    Default

    Anyway, back on topic: again, I recognize that AMD is putting some serious work into getting fglrx into better shape, but it is not a viable option for Linux or for AMD. As bridgman repeatedly says, fglrx won't be open-sourced, and frankly the big names in the OSS community don't want it either. Remember what you said: fglrx is a white mammoth with millions of lines of code shared with other OSes (I don't want to imagine that code, by the way; it creeps me out just to try). Maintainability would be horrible, and bug fixing would be horrid too.

    The idea behind this whole new stack is, first, maintainability; then code sharing among the several drivers out there (remember, AMD is just one piece of the puzzle) where possible; a standard approach to drivers; the best experience out of the box; and finally the ability to implement new features, open source or commercial, as quickly and efficiently as possible. So far they are doing great, but you have to remember this project is very new; even though it has already scored many wins in some departments, it is still very much alpha software. So be patient until at least the features get there; then you will see more FPS. As bridgman says, they believe it is possible to achieve 70% of fglrx's performance with this new stack and Mesa. Really cool, and maybe more, who knows.

  5. #15
    Join Date
    Nov 2008
    Posts
    784

    Default

    Linux kernel: 12 million LoC
    fglrx: 15 million LoC (source)

    That thing is huge.

    But the comparison isn't totally fair. While the Linux kernel includes drivers for a whole lot of hardware (not just GPUs), fglrx is more than a kernel module: most of the driver is userspace code.
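    For anyone who wants to sanity-check line counts like these on a tree they have locally, a crude sketch (cloc gives smarter numbers if installed; the throwaway files here are only so the example is self-contained and runs anywhere):

```shell
# Crude LoC count over C sources, the kind of number quoted above.
# A temporary tree is created so the example is self-contained.
tree=$(mktemp -d)
printf 'int main(void) { return 0; }\n' > "$tree/a.c"
printf '#pragma once\n'                 > "$tree/b.h"
total=$(find "$tree" -name '*.[ch]' -print0 | xargs -0 cat | wc -l)
echo "total LoC: $total"
rm -rf "$tree"
```

    Point the find at a real checkout instead of the temp dir to count a real project; such raw counts include comments and blank lines, which is one reason published LoC figures vary.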

  6. #16
    Join Date
    Jun 2009
    Posts
    1,191

    Default

    Quote Originally Posted by rohcQaH View Post
    Linux kernel: 12 million LoC
    fglrx: 15 million LoC (source)

    That thing is huge.

    But the comparison isn't totally fair. While the Linux kernel includes drivers for a whole lot of hardware (not just GPUs), fglrx is more than a kernel module: most of the driver is userspace code.
    Well, another thing is that fglrx is developed as shared code, so many of those lines are not necessarily Linux-specific, I think. They probably have some sort of compiler filter to build for each OS; I'm not sure, but if that's the case, then the Linux kernel is probably much bigger than the Linux-specific part of fglrx. Maybe.
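    The "compiler filter" guessed at above is a real, common technique: per-OS preprocessor conditionals. A minimal illustrative sketch, nothing to do with fglrx's actual build system (the BUILD_LINUX flag and os_path variable are made up for the demo):

```shell
# Sketch: per-OS code selection via preprocessor defines, the usual way
# shared driver code is "filtered" at build time. Illustrative only.
src=$(mktemp)
cat > "$src" <<'EOF'
#ifdef BUILD_LINUX
const char *os_path = "linux";
#else
const char *os_path = "other";
#endif
EOF
# Preprocess with the Linux flag set; only the Linux branch survives.
cc -E -x c -DBUILD_LINUX "$src" | grep os_path
rm -f "$src"
```

    With -DBUILD_LINUX the preprocessor emits only the "linux" line; build the same file with a different flag and you get the other branch, which is how one source tree can serve several OSes.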

  7. #17
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Hell, you are good... 1A++++++, good job!

    The open-source driver will have OpenGL 4, OpenCL, and video acceleration before fglrx fixes any of your bugs!




  8. #18
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by jrch2k8 View Post
    Well, another thing is that fglrx is developed as shared code, so many of those lines are not necessarily Linux-specific, I think. They probably have some sort of compiler filter to build for each OS; I'm not sure, but if that's the case, then the Linux kernel is probably much bigger than the Linux-specific part of fglrx. Maybe.
    Imagine it: every line of code carries a comment like "# this line/part exists only to bug-fix the closed-source Windows system..."

    The real overall code is just 10 lines.

  9. #19
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,153

    Default

    Quote Originally Posted by crazycheese View Post
    Some facts from me, to support jrch2k8's points:
    On an E5300 with 2x2 GB DDR2-800, an NVIDIA 9800 GT Green scored 95,000 points in non-composited mode and 80,000 points in composited mode.
    On the same hardware, an HD 4650 (which I acknowledge is far weaker, but not to this extent!) scored 6,000 points.
    Repeat after me: glxgears is not a benchmark. Don't try to use it as one, because its results are FUCKING INVALID.

    There, better now?

    In fact, fglrx performs identically to the Windows driver in OpenGL (sometimes slightly faster, too). The rest of your points are being addressed as we speak (better 2d acceleration, video acceleration).

    Bah.

  10. #20
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by BlackStar View Post
    Repeat after me: glxgears is not a benchmark. Don't try to use it as one, because its results are FUCKING INVALID.

    There, better now?

    In fact, fglrx performs identically to the Windows driver in OpenGL (sometimes slightly faster, too). The rest of your points are being addressed as we speak (better 2d acceleration, video acceleration).

    Bah.
    That's a lie. On Windows you use DX11, and Catalyst is much faster in DX11 than in OpenGL 4 on the same 3D scene!

    That's because there is no full-featured OpenGL 4 driver right now...

    In my view, they are only trying to kill OpenGL with their Windows/DirectX-first strategy!

    Your OpenGL-on-Windows argument is a joke, because all modern engines have a native DirectX render path!

    In the end, the OpenGL users are the losers in this game called the 3D war!
