X.Org 7.5 Released. Wait, Nope!

  • #31
    Originally posted by BlackStar View Post
    KMS is a great first step. I would also like to see input handling moved to the kernel (the kernel APIs are way more sane here!) I don't actually think that will happen (even with XInput2 looking dead in the water), but it would remove a large burden from the Xorg developers. Finally, I would like to see a controlled deprecation and rewrite of the worse and/or duplicated parts of the API.
    Xinput2 is dead in the water? Really?

    Originally posted by BlackStar View Post
    Edit
    Question: what is the point of XRender when we have OpenGL? No, really, why not simply design a render and compositing API that uses OpenGL underneath? Is there really something that XRender can do that cannot be done directly or indirectly with OpenGL?
    XRender dates from 2000 and XFree86 4.0.1, which was roughly when the DRI was implemented. They serve different purposes.

    It's only been recently that the ability to move everything to 3D rendering has really been feasible.
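    For anyone wondering what RENDER actually computes: its central request, Composite, applies a Porter-Duff operator (most commonly OVER) to premultiplied-alpha pixels, and that fixed-function compositing is what the drivers accelerate. A minimal sketch of the per-channel OVER arithmetic in C (a hypothetical helper for illustration, not actual Xorg code):

```c
#include <stdint.h>

/* Porter-Duff OVER on a single 8-bit channel with premultiplied alpha:
 *   dst' = src + (1 - alpha_src) * dst
 * This is the arithmetic behind RENDER's PictOpOver. The
 * (t + (t >> 8)) >> 8 step is a standard rounded divide-by-255.
 * Hypothetical helper for illustration, not actual Xorg code. */
static uint8_t over_channel(uint8_t src, uint8_t src_a, uint8_t dst)
{
    uint32_t t = (uint32_t)dst * (255u - src_a) + 128u;
    return (uint8_t)(src + ((t + (t >> 8)) >> 8));
}
```

    An opaque source (src_a == 255) replaces the destination channel outright; a fully transparent one (src == 0 and src_a == 0 in premultiplied form) leaves it untouched.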
    Last edited by mattst88; 01 April 2009, 07:24 PM.



    • #32
      Originally posted by BlackStar View Post
      Question: what is the point of XRender when we have OpenGL? No, really, why not simply design a render and compositing API that uses OpenGL underneath? Is there really something that XRender can do that cannot be done directly or indirectly with OpenGL?
      Not all graphics cards have OpenGL support (in hardware or via their drivers). I am all for accelerating the entire desktop via OpenGL; it would certainly improve the quality of both the desktop and the drivers.




      • #33
        Yeah, I think that was one of the attractions of something like XRender.

        On a high-end GPU you end up using the 3D engine for everything (in which case it's not *that* much harder to write a full GL driver), but on some chips you might have a fast alpha-blending 2D engine and a slow, undocumented, or missing 3D engine.



        • #34
          As bridgman said, XRender/EXA can be done on chips without 3D engines. It's not the best API, but it's a decent bottom line and is going to be faster than software OpenGL. Also setup on the app side is much cheaper for XRender compared to OpenGL.

          If people want to rewrite or replace X, go ahead. Don't expect any support until you can show people why you've come up with something better than what we've got in place already. Also please check to make sure your complaint isn't on the X12 wishlist already.

          If you want to bitch about driver support, instead consider learning C and fixing your drivers. That's how I got into Xorg work, and if I can do it, anybody can do it.

          ~ C.



          • #35
            Xinput2 is dead in the water? Really?
            Not dead then (phew!), but consider this: wasn't the *final* version supposed to be here already? Yet all we have is an early alpha! I mean no disrespect to the devs and I know there are good reasons for the delay, but this is indicative of an ailing project.

            As bridgman said, XRender/EXA can be done on chips without 3D engines. It's not the best API, but it's a decent bottom line and is going to be faster than software OpenGL. Also setup on the app side is much cheaper for XRender compared to OpenGL.
            I see. I was under the impression that we left 2D-only chips back in the era of the Matrox Mystique and the Voodoo 1 - are people seriously building chips with no 3D engine in this day and age? Even Intel claims D3D10 and OpenGL 2.1 support in their recent hardware!

            I get it that a solid 2d engine will be faster than a slow 3d engine. What I am getting at is, why not accelerate XRender using the 3d engine by default and only fall back to software/2d acceleration on specific chips? The alternative (separate EXA and OpenGL acceleration) seems rather inefficient, both wrt development resources and wrt utilization of modern hardware.

            Another question: what does fglrx use to accelerate 2d? I've read that it doesn't support EXA. Is it XAA or is it something else entirely (where does Textured2D fit in this?)



            • #36
              Originally posted by BlackStar View Post
              I see. I was under the impression that we left 2D-only chips back in the era of the Matrox Mystique and the Voodoo 1 - are people seriously building chips with no 3D engine in this day and age? Even Intel claims D3D10 and OpenGL 2.1 support in their recent hardware!

              I get it that a solid 2d engine will be faster than a slow 3d engine. What I am getting at is, why not accelerate XRender using the 3d engine by default and only fall back to software/2d acceleration on specific chips? The alternative (separate EXA and OpenGL acceleration) seems rather inefficient, both wrt development resources and wrt utilization of modern hardware.
              Hence Gallium.



              • #37
                Originally posted by BlackStar View Post
                I see. I was under the impression that we left 2D-only chips back in the era of the Matrox Mystique and the Voodoo 1 - are people seriously building chips with no 3D engine in this day and age? Even Intel claims D3D10 and OpenGL 2.1 support in their recent hardware!
                No, you missed the point here.

                Consider having documentation for the 2D part of a graphics card but lacking all 3D documentation. Think of the R300/R400/R500 series a few years ago.

                In this situation, hardware OpenGL isn't an option, but XRender is. Hence, XRender is used.



                • #38
                  Originally posted by BlackStar View Post
                  What I am getting at is, why not accelerate XRender using the 3d engine by default and only fall back to software/2d acceleration on specific chips? The alternative (separate EXA and OpenGL acceleration) seems rather inefficient, both wrt development resources and wrt utilization of modern hardware.
                  I don't think anyone is pushing back on running XRender over OpenGL; it's just that XRender is a lot easier to implement, so on any new hardware you tend to have XRender implemented and in use long before you have OpenGL. The R6xx/7xx situation is a pretty good example -- EXA with XRender support has been running for a couple of months already.

                  The second point is that running a simple API over a complex API tends to be great for experimenting but tends not to give you optimal performance. XRender over bare hardware, or over Gallium3D, is likely to outperform XRender over OpenGL.

                  Where things get interesting, though, is when you start looking at higher-level APIs (e.g. the GUI toolkits, or Cairo etc.). Maybe I'm missing something, but it seems like every few years someone implements Cairo over OpenGL and is very pleased with the results. Here's one that seems to be from 2004:

                  http://lists.freedesktop.org/archive...ch/001061.html

                  Anyways, the key points here are:

                  1. Nobody is pushing back on using OpenGL, just on blanket statements that the ONLY implementation should be over OpenGL.

                  2. The primary reason for having non-OpenGL implementations of XRender is that by the time a new chip has OpenGL support, it has normally had XRender support for a few months.

                  3. I have not seen test results, but my guess is that if you *did* compare XRender-over-OpenGL against XRender-over-raw-hardware, there would be a non-trivial performance penalty for using OpenGL. That's not a problem with OpenGL itself, just that XRender is a fairly simple function-at-a-time API while OpenGL is at its best drawing a bunch of things at once. That's why higher-level APIs like Cairo seem like a better fit.

                  The reason people keep babbling about Gallium3D is that it shows promise as an API which can make good use of 3D engine hardware but with considerably less overhead than OpenGL when "drawing one thing at a time".
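                  The function-at-a-time versus batched distinction in point 3 reduces to a toy model (the stubs below are hypothetical counters, not the real XRender or OpenGL entry points): an immediate-mode API submits one command per primitive, while a batched API submits one command for the whole set.

```c
#include <stddef.h>

/* Toy model of the two submission patterns. The "API calls" here are
 * hypothetical stubs that only count command submissions; they stand in
 * for XRenderComposite-style and glDrawArrays-style entry points. */

struct rect { int x, y, w, h; };

static int commands_submitted = 0;

static void immediate_composite(struct rect r)
{
    (void)r;
    commands_submitted++;          /* one command per rectangle */
}

static void batched_draw(const struct rect *r, size_t n)
{
    (void)r; (void)n;
    commands_submitted++;          /* one command for the whole batch */
}

static int draw_immediate(const struct rect *rects, size_t n)
{
    commands_submitted = 0;
    for (size_t i = 0; i < n; i++)
        immediate_composite(rects[i]);
    return commands_submitted;     /* grows linearly with n */
}

static int draw_batched(const struct rect *rects, size_t n)
{
    commands_submitted = 0;
    batched_draw(rects, n);
    return commands_submitted;     /* constant regardless of n */
}
```

                  Per-command overhead (state validation, command submission, protocol round-trips) multiplies by n in the first pattern and is paid once in the second, which is why a simple one-op-at-a-time API layered over a batch-oriented one tends to leave performance on the table.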

                  Originally posted by BlackStar View Post
                  Another question: what does fglrx use to accelerate 2d? I've read that it doesn't support EXA. Is it XAA or is it something else entirely (where does Textured2D fit in this?)
                  Not exactly sure, AFAIK it's normally XAA API, using the 2D engine on pre-6xx parts, presumably using 3D engine on 6xx and higher.
                  Last edited by bridgman; 01 April 2009, 10:05 PM.



                  • #39
                    Originally posted by MostAwesomeDude View Post
                    As bridgman said, XRender/EXA can be done on chips without 3D engines. It's not the best API, but it's a decent bottom line and is going to be faster than software OpenGL. Also setup on the app side is much cheaper for XRender compared to OpenGL.

                    If people want to rewrite or replace X, go ahead. Don't expect any support until you can show people why you've come up with something better than what we've got in place already. Also please check to make sure your complaint isn't on the X12 wishlist already.

                    If you want to bitch about driver support, instead consider learning C and fixing your drivers. That's how I got into Xorg work, and if I can do it, anybody can do it.

                    ~ C.
                    Amen to that.
                    Xorg is in great shape considering the ratio of code written to the number of available devs.



                    • #40
                      I know some feel that this sucks, but why can't X.org start dropping features and drivers that have no maintainer, are old, or have a small user base? Why should the lack of maintainers and the need to support some part drag the whole system down?

                      I think that good support for old hardware and features is beneficial to X.org and open source in general, but there is a trade-off, and at some point the old becomes a burden. If a feature has a small-to-medium user base and no maintainer, then put it at risk of being dropped. If some company (or user) depends on that feature, they will support it or work around it. Yes, it's not nice, but it's better than letting the whole system grind to a halt.

                      To put my money where my mouth is: if one of the features I need were dropped, I'd feel that it sucks, but I'm not in a position to demand that others do the work for me. I'd find some other way around the missing feature, support it myself, or buy new hardware.

                      -Antti

