
Port fglrx openGL stack to Gallium3D


  • jrch2k8
    replied
    Originally posted by rohcQaH View Post
    Linux kernel: 12 million LoC
    fglrx: 15 million LoC (source)

    That thing is huge.

    But the comparison isn't totally fair. While the Linux kernel includes drivers for a whole lot of hardware (not just GPUs), fglrx is more than a kernel module; most of the driver is userspace stuff.
    Well, another thing is that fglrx is developed as shared code across operating systems, so many of those lines of code aren't necessarily Linux-specific, I think. They probably have some sort of conditional compilation to build for each OS; I'm not sure, but if that's the case, then the Linux kernel is probably much bigger than the Linux-specific part of fglrx.



  • rohcQaH
    replied
    Linux kernel: 12 million LoC
    fglrx: 15 million LoC (source)

    That thing is huge.

    But the comparison isn't totally fair. While the Linux kernel includes drivers for a whole lot of hardware (not just GPUs), fglrx is more than a kernel module; most of the driver is userspace stuff.



  • jrch2k8
    replied
    Anyway, back on topic: again, I recognize that AMD is putting some serious work into getting fglrx into better shape, but it's not a viable option for Linux or for AMD. As bridgman has repeatedly said, fglrx won't be open-sourced, and frankly the big names in the OSS community don't want it either. Remember what you said: fglrx is a white mammoth with millions of lines of code shared with other OSes (I don't even want to imagine that code, btw; it creeps me out just to try). Maintainability would be horrible, and bug fixing would be horrid too.

    The idea behind this whole new stack is, first, maintainability; then code sharing among the various drivers out there where possible (remember AMD is just one piece of the puzzle); a standard approach to drivers; the best experience out of the box; and finally the ability to implement new features, open source or commercial, as fast and efficiently as possible. So far they are doing great, but you have to remember this project is very new. Even though it has already notched up wins in some areas, it is still very much alpha software. So be patient until the features get there; then you'll see more FPS. As bridgman says, they believe it's possible to reach 70% of fglrx's performance with this new stack and Mesa. Really cool, and maybe more, who knows.



  • crazycheese
    replied
    Some facts from me to support jrch2k8's points:
    On an E5300 with 2x2 GB DDR2-800, an Nvidia 9800GT Green scored 95000 points in non-composite mode and 80000 points in composite mode.
    The same hardware with an HD4650 (I acknowledge it's much weaker, but not to this extent!) scored 6000 points.

    If AMD upgrades fglrx, improves WINE support, improves 2D support, and adds video acceleration, they will land where Nvidia ALREADY is now.

    Gallium3D with FOSS drivers, on the other hand, is something that makes me want to sell the 9800GT and go with AMD hardware now, regardless of its lower 3D performance and the absence of H.264 acceleration.



  • jrch2k8
    replied
    Originally posted by marek View Post
    fglrx stack is 10 times larger project than OSS graphics driver stack if not more
    fair enough

    Originally posted by marek View Post
    and probably larger than kernel
    no way in hell

    Originally posted by marek View Post
    is better in pretty much every aspect
    Well, if you mean feature-wise, i.e. at least function X or Y is there and responds with something, right or wrong, then yes, of course; the OSS stack is pretty recent, that much is obvious. But if you mean it does the job and does it perfectly, or even well enough: no. Not before ATI was bought by AMD, not two years ago, not six months ago, and not today with the latest driver.

    Originally posted by marek View Post
    (not counting the little regressions which make users so angry
    Little regressions???? For real??? You call what fglrx does little regressions? Do you own an AMD card? Have you tried CrossFire?

    Now seriously, I could call it a glitch when they get some function slightly wrong, or when Nvidia does some trickery and gets everyone using an X extension in a non-standard way so AMD has to adapt to it or fight it. BUT:

    * 2D slowness is not a glitch. I mean, really, I have two 4850X2s (3200 cores) and I can watch the pixels get rendered one by one, lol. That's not a little regression.
    * 3D: well, fglrx is great at hitting 20000 in glxgears; with everything else you can expect every sort of issue, from SIGSEGVs to kernel panics (these are quite funny, btw) to running out of disk because of the massive syslog warnings from the kernel. Wine is unusable even with some 2D apps, many native games fail to run (granted, that could be partly the engine's fault), shaders get messed up depending on the driver version (the fglrx driver version selection process is as complex as selecting a good wine, lol), and composite is a beast of issues all on its own, again ranging from massive slowness to massive memory leaks depending on the driver version. That is not a simple regression.

    * CrossFire: well, if it works in driver version X (some versions do, some unleash the mother of all kernel panics), it's normally massively slower than on Windows and makes games more problematic than usual (granted, the Ubuntu beta driver improved things a bit; at least the kernel panics no longer force me to pull the power plug on my PC and happen less often). Either way, CrossFire is something you want disabled unless you've tested it thoroughly.
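    When triaging this kind of breakage, the first question is usually which stack is actually rendering (fglrx, classic Mesa, or a Gallium driver). A minimal sketch of that check, assuming the standard `glxinfo` utility from mesa-utils is installed; the helper name and the None fallback are my own:

```python
import shutil
import subprocess

def gl_renderer():
    """Return the OpenGL renderer string reported by glxinfo, or None.

    Returns None when glxinfo is missing or produces no renderer line
    (e.g. no display), so the check degrades gracefully on headless boxes.
    """
    if shutil.which("glxinfo") is None:
        return None
    result = subprocess.run(["glxinfo"], capture_output=True, text=True)
    for line in result.stdout.splitlines():
        # glxinfo prints e.g. "OpenGL renderer string: Gallium 0.4 on AMD RV770"
        if line.startswith("OpenGL renderer string:"):
            return line.split(":", 1)[1].strip()
    return None

print(gl_renderer())
```

    A "Gallium" renderer string means the new stack is in use; fglrx reports its own ATI branding instead.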

    OpenCL: well, it's still beta, I think, but in my test cases the library is there but has too many issues. We tried an OpenCL book example and it ran just fine everywhere (WinX/Mac/Nvidia/AMD) but never on AMD Linux (the SDK provides a different driver, but you need to downgrade half the distro to use it; I didn't try it, in the end I just stopped caring). So this is not a simple regression either, but I'll grant you that having the source code of that library could speed up the OSS stack's own OpenCL development.
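    Before running any book example, the quickest sanity check is whether the OpenCL library answers at all by enumerating platforms. A hedged sketch via ctypes; the soname and the graceful None fallback are assumptions, while `clGetPlatformIDs` itself is standard OpenCL 1.0 API:

```python
import ctypes

def opencl_platform_count():
    """Return the number of OpenCL platforms, or None if no usable runtime."""
    try:
        cl = ctypes.CDLL("libOpenCL.so.1")  # common ICD loader soname on Linux
    except OSError:
        return None  # no OpenCL runtime installed at all
    count = ctypes.c_uint(0)
    # cl_int clGetPlatformIDs(cl_uint num_entries,
    #                         cl_platform_id *platforms,
    #                         cl_uint *num_platforms)
    err = cl.clGetPlatformIDs(0, None, ctypes.byref(count))
    if err != 0:  # CL_SUCCESS == 0
        return None
    return count.value

print(opencl_platform_count())
```

    Zero platforms with the library present usually points at a broken driver/SDK pairing rather than at the application code.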

    UVD: well, it's basically garbage without a nasty combination of library versions, and even if you get it to render at all, it's not even close to VDPAU or the OSS Xv in quality. Besides, in my case it hung my computer after about 35 minutes of playing HD content (for me it's not worth the fight; it's easier to just put a gtx240 in my HTPC, which is like 3m, lol). Again, not a simple regression.

    GL4: again, it could work as a reference, but it's still very immature and buggy; at least the Unigine Heaven demo was very slow, to the point that my 8800 GTS 320 almost beat my quadfire with 3200 cores (I guess it still needs work, since this API is so new). Well, I must say in my case it's really more like GL 3.3, since my card is 4xxx series; I don't doubt Evergreen might be a beast at GL4, i.e. maybe they optimized it only to show off on Evergreen, who knows.

    Well, you name an advanced memory manager; I would say advanced memory leaker, though at least it has gotten better since the 9.x series of fglrx, and on a very powerful system it's less noticeable (I still don't like my graphics driver stealing 1 GB of my RAM, but whatever, I have 7 GB more to work with). I mean, if I log in without anything 3D running, memory stays around 500 MB for as long as I work; if I activate, say, Compiz or KWin, within 30 minutes my RAM usage is getting near 2.2 GB and keeps climbing until things finally slow down and I have to restart Xorg. Obviously I tried different distros, and with the OSS driver it doesn't happen.
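    Rather than eyeballing a desktop widget, this kind of growth can be put into numbers by sampling the X server's resident set from /proc over time. A small sketch (Linux-only; the helper name is mine, and finding the Xorg PID is left to the reader):

```python
import re
from pathlib import Path

def rss_mib(pid="self"):
    """Resident set size of a process in MiB, read from /proc/<pid>/status."""
    status = Path(f"/proc/{pid}/status").read_text()
    # /proc reports e.g. "VmRSS:     51234 kB"
    m = re.search(r"^VmRSS:\s+(\d+)\s+kB", status, re.MULTILINE)
    if m is None:
        raise RuntimeError(f"no VmRSS entry for pid {pid}")
    return int(m.group(1)) / 1024.0

# Sample the Xorg PID every few minutes; a steady climb under an
# otherwise idle composited desktop is the symptom of a driver leak.
print(f"this interpreter: {rss_mib():.1f} MiB resident")
```

    Logging these samples alongside the driver version makes it easy to show which fglrx releases leak and which don't.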

    I don't say fglrx is completely worthless; I believe that by hiring a good QA manager they could improve a lot over the next few years. My point is that these bugs aren't a few simple regressions; they are heavy problems, and even if AMD released the code (which will never happen, btw; there are several threads about it), the community would have to wait a long time to get something worth the effort. At this point I don't see the need to chase FPS parity with fglrx at the expense of all the time spent fixing those bugs, when the open stack can be much simpler to maintain and not that far behind fglrx (once the optimization work gets there, obviously). And OpenCL can't be that hard; it's supposed to be royalty-free from scratch, so the docs should be easier for AMD to release.



  • marek
    replied
    Originally posted by jrch2k8 View Post
    Let's face it, the fglrx 3D stack is very far from optimal to begin with
    Let's face it: the fglrx stack is a project ten times larger than the OSS graphics driver stack, if not more (and probably larger than the kernel), is better in pretty much every aspect (not counting the little regressions that make users so angry), and has tons and tons of great features Gallium will slowly be picking up over the following years. I mean, if they made it open source, there would no longer be a need to develop drivers in Mesa; those little fglrx regressions and other deficiencies would get fixed by the community, and open source graphics would jump from GL 2.1 and no OpenCL to GL 4.0 and working OpenCL, with an advanced shader compiler and optimizer, memory manager, and whatnot. And Mesa/Gallium would slowly die because no one would care about it anymore... that's how I see it.



  • jrch2k8
    replied
    Hmm, I hate the post limit and edit restrictions, argh.

    Btw, guys, outstanding job on the OSS driver. Now even the lowly GPU in my laptop can play Flash video fullscreen under KWin composite, perfectly fluid, using yesterday's Mesa git. +1, I love ya.



  • jrch2k8
    replied
    I don't want to be rude or anything, but I think porting the fglrx 3D part to Gallium3D would be a pain for both parties. Let's face it, the fglrx 3D stack is very far from optimal to begin with, and let's not even talk about stability and compatibility (I know it's shared code and some people work hard to make it better, but still). What possible benefit could you get from that stack? And how much bug-fixing time do you think AMD would need to polish that mess, if even today their own stack is that buggy? (I know the Wine issues are both parties' fault, using a very old kernel/Xorg helps, and by passing a number of parameters at kernel boot you can even get something that works semi-decently, but the point is I'm not supposed to do all that with a commercial product, especially since I paid $600 for my graphics cards.)

    On the other hand, the OSS 3D stack is getting very solid and stable, and every day you discover something new working really well. Honestly, I haven't had issues with anything using Mesa for a long time now.

    Lol, I'm using the latest git code around, the latest kernel code available, even the unstable KMS/DRM branches, on all my AMD PCs, and amazingly I never hit more than 1% of the issues I've been having with the latest fglrx; that tells you something, you know. I know some stuff is missing and some native games don't work just yet (well, HoN and Nexuiz are actually quite good in Mesa 7.9), but you get the idea. Actually, for average-Joe desktop use the OSS driver is already light years ahead of fglrx: fast 2D rendering, web browsing, basic 3D, some games like Nexuiz or HoN (playable, not an FPS king), and an excellent Compiz/KWin composite experience.

    Now, I agree the OSS stack is under heavy development and Gallium3D is a work in progress, so we don't have a fully featured stable driver to compare with. But the solution is not to port the fglrx mess to open source; the solution is to give the new OSS stack time to reach feature parity (it's not that far away: Mesa 7.9 is getting really close to supporting GL 2.1, which is basically what you need for everything for now), and Mesa 8.0 will probably bring GL3, stack optimization, or both.

    Now, about Wine: well, I foresaw that Wine would work better with Mesa or the Nvidia blob before it worked well with fglrx (actually, Wine 2D apps like the Adobe CS4 suite work like a charm with the OSS driver, unlike with fglrx). I don't blame the Wine project: Mesa was in very bad shape until the new stack landed in recent versions, and fglrx has only been barely usable since not long ago, so the only "workable" solution was Nvidia. Lol, my employees threatened to resign if we didn't drop support for AMD on Linux, and we were only trying to add some basic 3D telemetry representations of geological/petroleum data (which is supposed to be a "simple" task, at least compared to really complex 3D work); I don't want to imagine what it's like with something as massive as writing an entire DX9/10 emulation layer on top of OpenGL.

    Maybe by Q2 or Q3 of 2011 we'll have a feature-complete graphics stack: not the fastest, but good enough until optimization comes into play, i.e. GL 2.1/3.3, an OpenCL state tracker, and better or optimal power management in the OSS stack.

    About Blu-ray and the DRM crap around it: well, that is surprisingly easy to hack, from the HDMI part to the codec encryption, so it's a non-issue on any OS, actually. And stuff like DXT3 and so on can be handled without much trouble: with OpenCL 1.0 working, I think devs wouldn't even need to know how the GPU does it in C; they could just write an equivalent directly in OpenCL and make it a plugin in legal limbo.

    About UVD: well, with an OpenCL state tracker I don't see the need for it anymore; just accelerating the codecs in ffmpeg/Theora/etc. plus Xv should be enough for anything you need.

    So by that time AMD can drop, or worry less about, Radeon cards in fglrx and focus it entirely on FireGL cards, which I think is the best solution for both sides: a superb OSS driver for mainstream, and the super-closed driver for the DRM junkies with the big-$$$ cards.



  • Qaridarium
    replied
    Originally posted by bridgman View Post
    the *current* Gallium3D implementation (and API) is probably going to need significant changes before it can support GL 4.0 and OpenCL.
    This doesn't sound so good... but yes... do it, faster than a bullet can fly.



  • bridgman
    replied
    I guess the key point here is that the developers are saying "the cool new stuff users want to see is going to come after we move to Gallium3D", but sometimes that is being interpreted as "Gallium3D gives us all this stuff for free", rather than "the developers generally think the Gallium3D architecture is the right place to invest their time, so they're going to try to move to Gallium3D early rather than adding a bunch of features to the classic HW driver and *then* moving to Gallium3D and abandoning the code they just enhanced".

    Sorry about the long sentence.

    We are doing new HW support on the classic HW driver model, but that doesn't require a big investment in *improving* the driver code, just adding new paths to the existing code.

