Port fglrx openGL stack to Gallium3D


  • cobalt
    started a topic Port fglrx openGL stack to Gallium3D

    Port fglrx openGL stack to Gallium3D

    With the OSS ATI drivers moving to the Gallium3D architecture in the medium/long term, could the fglrx OpenGL/OpenCL stack be ported over to become a Gallium3D state tracker and released as a blob?

    While this would certainly require a large initial investment of time and resources, in the long run it would have a lot of benefits. Firstly, it would reduce the massive code duplication that comes with maintaining two drivers. It would also give the proprietary code a stable ABI to work with, as opposed to constantly chasing the unstable kernel and xorg ABIs. This in turn could let AMD concentrate development on the OSS drivers. In addition, an advanced OpenGL state tracker would be a big benefit to other Gallium drivers such as nouveau.

    So what do you think? Could we build the cathedral on top of the bazaar?

  • droidhacker
    replied
    Originally posted by nanonyme View Post
    From a completely theoretical point of view, this sounds intriguing. From a realistic point of view, didn't some vendor already try the "we do open-source DRM and closed userspace 3D libraries" approach, which ended with them not getting their DRM code into the Linux kernel at all?
    I thought that the big problem was that the closed stuff was the ONLY use for the DRM, and that was why it wasn't accepted.

    With AMD, sharing the DRM between both the OPEN as well as the CLOSED userspace stuff would eliminate this issue. Especially since the DRM is *already in kernel*.

    Nobody said you couldn't use it for BOTH. Just that kernel stuff won't be accepted if it will ONLY be used for proprietary closed blobs.

  • nanonyme
    replied
    Originally posted by bridgman View Post
    In many ways running the fglrx 3D userspace driver over the open source kernel driver would be less work *and* more useful. Even that would be a *lot* of work, however, since the memory management abstractions are quite different.
    From a completely theoretical point of view, this sounds intriguing. From a realistic point of view, didn't some vendor already try the "we do open-source DRM and closed userspace 3D libraries" approach, which ended with them not getting their DRM code into the Linux kernel at all?

  • droidhacker
    replied
    And the thing about it is this:
    A LOT of fglrx is NOT NEEDED (strictly speaking). The open source xorg drivers are good for most things people do (in fact, typically BETTER than fglrx), so the second component of the OP's dream involves cutting all the parts that the OP perceives as REDUNDANT, leaving only the 3D acceleration components to plug in via Gallium3D/mesa. THIS is the part of the request that would be really tough to implement.

  • droidhacker
    replied
    Originally posted by Svartalf View Post
    The big problem for them would be that it's more of a moving target than the way they're doing things right now. The main reason the FOSS driver works as well as it does is that it's in lock-step with the Gallium3D API edge, because it's part and parcel of that project. For them it would be a fairly extensive rewrite of the parts that are breaking, as you state, only to end up chasing an edge that moves just as often as the ones they track today.
    I never said it was a good idea. I was simply rephrasing what the OP was asking for in a manner that makes a little more sense.

    And FYI: I don't agree with you.
    The KERNEL end of fglrx works the way the OP suggested. It's mainly the xserver end that breaks. Sure, the changing kernel can break fglrx, but fglrx ships with the SOURCE CODE for its kernel interface, so that can be fixed by the community to a certain extent. What is needed is a similar open source INTERFACE for the xserver.

    Current:
    kernel -- open source kernel interface -- fglrx
    xserver -- fglrx

    Wanted:
    kernel -- open source kernel interface -- fglrx
    xserver -- open source xserver interface -- fglrx

  • jrch2k8
    replied
    Originally posted by haplo602 View Post
    Why bother bridgman ? I know you are trying to be nice here, but the guy has no clue what he's talking about .... he's comparing DX11 to OpenGL4 on cards that are capable of neither (hint ... HD4850x2).

    Also I do not think his Quadfire setup has any benefit over a single HD4850x2, since the CPU will most likely not be able to feed the cards properly. Yet he's expecting the holy grail and more ...
    Well, at first I fixed it and said later OpenGL 3.2/3.3, because I tested in DirectX 10.1, which my cards support (I missed the ultra-short edit window, so it stayed as DX11, but it is DX10.1). Besides, OpenGL is an incremental API: OpenGL 3.x only misses certain hardware-specific features like tessellation, which is only present in DX11-capable hardware, but Unigine runs fine with DX9/DX10 and OpenGL 3.x, just without tessellation and the other features missing on this hardware. In any case, OpenGL 4 isn't incompatible with OpenGL 3.3 or anything, so testing with a non-DX11-capable card is not an issue, since you still exercise most of the OpenGL implementation that is common to both API versions; that can give you an idea of the general performance of the GL implementation in the driver.

    About the quadfire: I'm aware it doesn't scale well, at least not at low resolutions, but my main purpose for this quadfire rig is to have something powerful to play with OpenCL computation, so it's not like I'm expecting 500fps in COD MW2 or anything like that. It's just that when I tested the driver I was too lazy to open my case and remove the second card; either way, having the second card shouldn't kill performance, though I agree that isn't impossible either. For now my Linux install is too bleeding-edge for fglrx, so I have to do a clean install, and for that I'll wait for my new disk because, well, I'm too lazy to downgrade my distro. Now, if someone else has a dual-boot system, you could run some tests under both OSes and check whether the performance is close or not, because I don't rule out the possibility that fglrx just doesn't like X2 cards and this is a specific case.

  • Svartalf
    replied
    Originally posted by droidhacker View Post
    Right now, fglrx breaks whenever the kernel or xorg is changed.
    The kernel part is generally manageable since that interface is open source. The big problem is that fglrx breaks whenever the xserver changes.
    The big problem for them would be that it's more of a moving target than the way they're doing things right now. The main reason the FOSS driver works as well as it does is that it's in lock-step with the Gallium3D API edge, because it's part and parcel of that project. For them it would be a fairly extensive rewrite of the parts that are breaking, as you state, only to end up chasing an edge that moves just as often as the ones they track today.

  • Svartalf
    replied
    Originally posted by BlackStar View Post
    Repeat after me: glxgears is not a benchmark. Don't try to use it as one, because its results are FUCKING INVALID.
    Heh... You forgot to mention they should repeat this over and over, using a blunt object applied to the back of their head whilst saying it until the desire to use glxgears as a benchmark leaves their minds, hopefully for good...

    There, better now?
    I certainly feel better now...

    In fact, fglrx performs identically to the Windows driver in OpenGL (sometimes slightly faster, too). The rest of your points are being addressed as we speak (better 2d acceleration, video acceleration).

    Bah.
    I would hesitate to say this is going to happen in a timely manner. They've been saying things along these lines for many years now, unfortunately, while only putting a few people on it compared to the Windows side of things. It's not being ugly or accusatory when I state this; it's only a statement of fact. They can't make a business case (yet... though if what I've been told in confidence actually HAPPENS, that may change deeply for the better by the end of this year or the middle of next) to justify massive cleanups on the codebase where they've made incorrect assumptions at the WinSys layer, to use a Gallium3D term everyone here can relate to (which is actually where many of your bugs are coming from). Because of this, there have been many, many years of promises, most of them unfulfilled until years later, if at all.

    It's a good part of where the bitching about fglrx stems from, actually. If you didn't know what was going on and why, you'd be peeved that they couldn't get "simple" things right, which fglrx does screw up on; and we won't even get into Crossfire etc., which is their baby and, in the minds of the community at large, should already have been there in stable form.

  • haplo602
    replied
    Originally posted by bridgman View Post
    jrch2k8, what kind of performance difference (linux vs windows, amd vs nvidia) do you see running with a single GPU rather than a 4-GPU crossfire rig ?
    Why bother bridgman ? I know you are trying to be nice here, but the guy has no clue what he's talking about .... he's comparing DX11 to OpenGL4 on cards that are capable of neither (hint ... HD4850x2).

    Also I do not think his Quadfire setup has any benefit over a single HD4850x2, since the CPU will most likely not be able to feed the cards properly. Yet he's expecting the holy grail and more ...

  • droidhacker
    replied
    @bridgman:

    I think the key interesting idea from this thread is to make some kind of open source thing that sits in between fglrx and kernel/xorg/whatever in order to improve the ability to follow the bleeding edge with fglrx.

    Right now, fglrx breaks whenever the kernel or xorg is changed.
    The kernel part is generally manageable since that interface is open source. The big problem is that fglrx breaks whenever the xserver changes.

    But right now we have an open source X driver that *works*, and follows the xserver. So what these guys want is to be able to take the existing SOLID open source parts and mix in certain chunks (the acceleration chunks) of fglrx in order to come up with an overall driver package that doesn't break every time someone sneezes.

    Which is not a bad idea. Even if it would probably be extremely expensive to implement.

    (note: I'm not interested in this -- I personally am happy with the progress and function of the open source drivers.)
