Port fglrx OpenGL stack to Gallium3D
-
Originally posted by bridgman View Post
jrch2k8, what kind of performance difference (linux vs windows, amd vs nvidia) do you see running with a single GPU rather than a 4-GPU crossfire rig?
-
@bridgman:
I think the key interesting idea from this thread is to make some kind of open source layer that sits between fglrx and the kernel/xorg/whatever, in order to make it easier to follow the bleeding edge with fglrx.
Right now, fglrx breaks whenever the kernel or xorg is changed.
The kernel part is generally manageable since that interface is open source. The big problem is that fglrx breaks whenever the xserver changes.
But right now we have an open source X driver that *works*, and follows the xserver. So what these guys want is to be able to take the existing SOLID open source parts and mix in certain chunks (the acceleration chunks) of fglrx in order to come up with an overall driver package that doesn't break every time someone sneezes.
Which is not a bad idea, even if it would probably be extremely expensive to implement.
(note: I'm not interested in this -- I personally am happy with the progress and function of the open source drivers.)
-
Originally posted by bridgman View Post
jrch2k8, what kind of performance difference (linux vs windows, amd vs nvidia) do you see running with a single GPU rather than a 4-GPU crossfire rig?
Why bother, bridgman? I know you are trying to be nice here, but the guy has no clue what he's talking about... he's comparing DX11 to OpenGL 4 on cards that are capable of neither (hint: HD4850x2).
Also, I do not think his Quadfire setup has any benefit over a single HD4850x2, since the CPU will most likely not be able to feed the cards properly. Yet he's expecting the holy grail and more...
-
Originally posted by BlackStar View Post
Repeat after me: glxgears is not a benchmark. Don't try to use it as one, because its results are FUCKING INVALID.
There, better now?
In fact, fglrx performs identically to the Windows driver in OpenGL (sometimes slightly faster, too). The rest of your points are being addressed as we speak (better 2D acceleration, video acceleration).
Bah.
It's a good part of where the bitching about fglrx stems from, actually. If you didn't know what was going on and why, you'd be peeved that they couldn't get right the "simple" things that fglrx does screw up on. And we won't even get into Crossfire, etc., which is their baby and which, in the minds of the community at large, should already have been there in stable form.
-
Originally posted by droidhacker View Post
Right now, fglrx breaks whenever the kernel or xorg is changed.
The kernel part is generally manageable since that interface is open source. The big problem is that fglrx breaks whenever the xserver changes.
The big problem for them would be that it's more of a moving target than the way they're doing things right now. The main reason that the FOSS driver works as well as it does is that it's in lock-step with the Gallium3D API edge because it's part and parcel of that project. For them, it's a fairly extensive re-write for the parts that are breaking like you state, only to get to an edge that does the same thing on them with the same level of regularity right at the moment.
-
Originally posted by haplo602 View Post
Why bother, bridgman? I know you are trying to be nice here, but the guy has no clue what he's talking about... he's comparing DX11 to OpenGL 4 on cards that are capable of neither (hint: HD4850x2).
Also, I do not think his Quadfire setup has any benefit over a single HD4850x2, since the CPU will most likely not be able to feed the cards properly. Yet he's expecting the holy grail and more...
About the quadfire: I'm aware it doesn't scale well, at least not at low resolutions, but the main purpose of this quadfire rig is to have something powerful to play with OpenCL computation, so it's not like I'm expecting 500 fps in COD MW2 or anything like that. It's just that when I tested the driver I was too lazy to open my case and remove the second card. Either way, having the second card shouldn't kill performance, though I agree that isn't impossible either. For now my Linux install is too bleeding edge for fglrx, so I have to do a clean install, and for that I'll wait for my new disk, because I'm too lazy to downgrade my distro. If someone else has a dual-boot system, you could run some tests on both OSes and check whether your performance is close or not, because I don't rule out the possibility that fglrx just doesn't like X2 cards and this is a specific case.
-
Originally posted by Svartalf View Post
The big problem for them would be that it's more of a moving target than the way they're doing things right now. The main reason that the FOSS driver works as well as it does is that it's in lock-step with the Gallium3D API edge because it's part and parcel of that project. For them, it's a fairly extensive re-write for the parts that are breaking like you state, only to get to an edge that does the same thing on them with the same level of regularity right at the moment.
And FYI: I don't agree with you.
The KERNEL end of fglrx works the way the OP suggested. It's mainly the xserver end that breaks. Sure, the changing kernel can break fglrx, but fglrx comes with the SOURCE CODE for the kernel interface, so that can be fixed by the community to a certain extent. What is needed is a similar open source INTERFACE for the xserver.
Current:
kernel -- open source kernel interface -- fglrx
xserver -- fglrx
Wanted:
kernel -- open source kernel interface -- fglrx
xserver -- open source xserver interface -- fglrx
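To make "open source xserver interface" concrete, here's a rough sketch of the kind of versioned shim I have in mind. Every name below is invented for illustration; nothing like this actually exists:

/*
 * xserver_shim.h -- hypothetical open source layer between the xserver
 * and the fglrx binary.  When the xserver ABI changes, the community
 * patches and rebuilds this shim; the blob keeps talking to the stable
 * interface below, just like the open source kernel interface that
 * already ships with fglrx.
 */
#include <stdint.h>

#define FGLRX_X_IFACE_VERSION 1u

struct fglrx_x_iface {
    uint32_t version;                      /* blob refuses to load on mismatch */
    int  (*screen_init)(int screen_index); /* stable wrapper around ScreenInit */
    int  (*enter_vt)(int screen_index);    /* stable wrapper around VT entry   */
    void (*leave_vt)(int screen_index);    /* stable wrapper around VT exit    */
};

/* Implemented by the open source shim, consumed by the blob at load time. */
int fglrx_x_iface_get(struct fglrx_x_iface *out);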
-
And the thing about it is this: a LOT of fglrx is NOT NEEDED (strictly speaking). The open source xorg drivers are good for most things people do (in fact, typically BETTER than fglrx), so the second component of the OP's dream involves cutting all the parts that the OP perceives as REDUNDANT, leaving just the 3D acceleration components to plug in via G3D/mesa. THIS is the part of his request that would be really tough to implement.
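To illustrate the seam that would have to exist (purely hypothetical; none of these are real fglrx or Gallium3D symbols), the FOSS driver would need to pull some table of acceleration entry points out of the blob, roughly like this:

/*
 * Hypothetical glue for "keep the FOSS driver, borrow only fglrx's 3D
 * core".  The blob_* name stands in for a symbol fglrx would have to
 * export; nothing like this exists today.
 */
#include <stddef.h>

struct accel_ops {
    int  (*init)(int drm_fd);                     /* bring up the 3D core    */
    int  (*submit)(const void *cmds, size_t len); /* push one command buffer */
    void (*fini)(void);                           /* tear the core down      */
};

/* The open source side would dlopen() the blob and request this table,
 * checking abi_version before trusting anything in it. */
const struct accel_ops *blob_get_accel_ops(unsigned abi_version);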
-
Originally posted by bridgman View Post
In many ways running the fglrx 3D userspace driver over the open source kernel driver would be less work *and* more useful. Even that would be a *lot* of work, however, since the memory management abstractions are quite different.
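To illustrate the kind of mismatch bridgman is describing, here's a rough sketch of the translation such a port would need. The fglrx-style request is entirely made up; the other side approximates the open radeon kernel driver's GEM allocation ioctl (check radeon_drm.h for the real layout before trusting any of this):

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>        /* drmCommandWriteRead() from libdrm    */
#include <drm/radeon_drm.h> /* struct drm_radeon_gem_create, domains */

/* Hypothetical request format the fglrx userspace blob might use. */
struct fglrx_alloc_req {
    unsigned long bytes;
    int           want_vram; /* nonzero: VRAM, otherwise GTT */
};

/* Map one abstraction onto the other: fglrx-style request in,
 * GEM buffer handle from the open kernel driver out. */
static int shim_alloc(int fd, const struct fglrx_alloc_req *req,
                      uint32_t *handle_out)
{
    struct drm_radeon_gem_create args;
    memset(&args, 0, sizeof(args));
    args.size = req->bytes;
    args.initial_domain = req->want_vram ? RADEON_GEM_DOMAIN_VRAM
                                         : RADEON_GEM_DOMAIN_GTT;
    int ret = drmCommandWriteRead(fd, DRM_RADEON_GEM_CREATE,
                                  &args, sizeof(args));
    if (ret == 0)
        *handle_out = args.handle;
    return ret;
}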