X.Org 7.5 Released. Wait, Nope!

  • #31
    Originally posted by BlackStar View Post
    KMS is a great first step. I would also like to see input handling moved to the kernel (the kernel APIs are way more sane here!). I don't actually think that will happen (even with XInput2 looking dead in the water), but it would remove a large burden from the Xorg developers. Finally, I would like to see a controlled deprecation and rewrite of the worst and/or duplicated parts of the API.
    Xinput2 is dead in the water? Really?

    Originally posted by BlackStar View Post
    Question: what is the point of XRender when we have OpenGL? No, really, why not simply design a render and compositing API that uses OpenGL underneath? Is there really something that XRender can do that cannot be done directly or indirectly with OpenGL?
    XRender dates from 2000 and XFree86 4.0.1. This was roughly when the DRI was implemented. They serve different purposes.

    It's only recently that moving everything to 3D rendering has really become feasible.
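
    For context, XRender is a thin, one-request-per-primitive protocol. A minimal sketch of what a client-side composite looks like (error handling omitted; assumes the src and dst Pictures already exist):

    /* Minimal XRender sketch: composite src over dst with one request.
       The server decides whether this hits a 2D engine, a 3D engine,
       or software. Build with: cc -lX11 -lXrender (checks omitted). */
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrender.h>

    void blend_over(Display *dpy, Picture src, Picture dst, int w, int h)
    {
        XRenderComposite(dpy, PictOpOver,
                         src, None, dst,
                         0, 0,   /* src origin  */
                         0, 0,   /* mask origin */
                         0, 0,   /* dst origin  */
                         w, h);
    }
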
    Last edited by mattst88; 04-01-2009, 07:24 PM.

    • #32
      Originally posted by BlackStar View Post
      Question: what is the point of XRender when we have OpenGL? No, really, why not simply design a render and compositing API that uses OpenGL underneath? Is there really something that XRender can do that cannot be done directly or indirectly with OpenGL?
      Not all graphics cards have OpenGL support (in hardware or via their drivers). I am all for accelerating the entire desktop via OpenGL. It would certainly improve the quality of both the desktop and the drivers.


      • #33
        Yeah, I think that was one of the attractions of something like XRender.

        On a high end GPU you end up using the 3D engine for everything (in which case it's not *that* much harder to write a full GL driver), but on some chips you might have a fast alpha-blending 2D engine and a slow, undocumented or missing 3D engine.

        • #34
          As bridgman said, XRender/EXA can be done on chips without 3D engines. It's not the best API, but it's a decent bottom line and is going to be faster than software OpenGL. Also setup on the app side is much cheaper for XRender compared to OpenGL.
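
          To put the setup point in concrete terms, here's a rough sketch: making an existing window an XRender target is basically two calls (error handling and cleanup omitted), whereas a GL target needs a visual/FBConfig choice, a context, and a make-current dance before the first pixel.

          /* Sketch: turn an existing window into an XRender target. */
          #include <X11/Xlib.h>
          #include <X11/extensions/Xrender.h>

          Picture picture_for_window(Display *dpy, Window win, Visual *visual)
          {
              XRenderPictFormat *fmt = XRenderFindVisualFormat(dpy, visual);
              XRenderPictureAttributes pa = { 0 };  /* defaults are fine here */
              return XRenderCreatePicture(dpy, win, fmt, 0, &pa);
          }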

          If people want to rewrite or replace X, go ahead. Don't expect any support until you can show people why you've come up with something better than what we've got in place already. Also please check to make sure your complaint isn't on the X12 wishlist already.

          If you want to bitch about driver support, instead consider learning C and fixing your drivers. That's how I got into Xorg work, and if I can do it, anybody can do it.

          ~ C.

          • #35
            Originally posted by mattst88 View Post
            Xinput2 is dead in the water? Really?
            Not dead then (phew!), but consider this: wasn't the *final* version supposed to be here already? Yet all we have is an early alpha! I mean no disrespect to the devs and I know there are good reasons for the delay, but this is indicative of an ailing project.

            Originally posted by MostAwesomeDude View Post
            As bridgman said, XRender/EXA can be done on chips without 3D engines. It's not the best API, but it's a decent bottom line and is going to be faster than software OpenGL. Also setup on the app side is much cheaper for XRender compared to OpenGL.
            I see. I was under the impression that we left 2D-only chips back in the era of the Matrox Mystique and the Voodoo 1 - are people seriously building chips with no 3D engine in this day and age? Even Intel claims D3D10 and OpenGL 2.1 support in their recent hardware!

            I get it that a solid 2D engine will be faster than a slow 3D engine. What I am getting at is: why not accelerate XRender using the 3D engine by default and only fall back to software/2D acceleration on specific chips? The alternative (separate EXA and OpenGL acceleration paths) seems rather inefficient, both wrt development resources and wrt utilization of modern hardware.
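
            What I'm picturing is something like the dispatch below - all names here are invented, purely to illustrate the fallback idea, not real driver entry points:

            /* Hypothetical sketch: accelerate XRender with whatever engine
             * the chip (and its documentation) actually offers. */
            enum accel_path { ACCEL_3D, ACCEL_2D, ACCEL_SOFTWARE };

            enum accel_path pick_composite_path(int has_3d_docs, int has_2d_blitter)
            {
                if (has_3d_docs)
                    return ACCEL_3D;     /* e.g. textured quads on the 3D engine */
                if (has_2d_blitter)
                    return ACCEL_2D;     /* alpha-blending 2D engine, if present */
                return ACCEL_SOFTWARE;   /* pixman-style CPU fallback */
            }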

            Another question: what does fglrx use to accelerate 2D? I've read that it doesn't support EXA. Is it XAA, or something else entirely (and where does Textured2D fit into this)?

            • #36
              Originally posted by BlackStar View Post
              I see. I was under the impression that we left 2D-only chips back in the era of the Matrox Mystique and the Voodoo 1 - are people seriously building chips with no 3D engine in this day and age? Even Intel claims D3D10 and OpenGL 2.1 support in their recent hardware!

              I get it that a solid 2D engine will be faster than a slow 3D engine. What I am getting at is: why not accelerate XRender using the 3D engine by default and only fall back to software/2D acceleration on specific chips? The alternative (separate EXA and OpenGL acceleration paths) seems rather inefficient, both wrt development resources and wrt utilization of modern hardware.
              Hence Gallium.

              • #37
                Originally posted by BlackStar View Post
                I see. I was under the impression that we left 2D-only chips back in the era of the Matrox Mystique and the Voodoo 1 - are people seriously building chips with no 3D engine in this day and age? Even Intel claims D3D10 and OpenGL 2.1 support in their recent hardware!
                No, you missed the point here.

                Consider having documentation for the 2D part of a graphics card, but lacking all 3D documentation. Think of the R300/R400/R500 families a few years ago.

                In this situation, hardware OpenGL isn't an option, but XRender is. Hence, XRender is used.

                • #38
                  Originally posted by BlackStar View Post
                  What I am getting at is: why not accelerate XRender using the 3D engine by default and only fall back to software/2D acceleration on specific chips? The alternative (separate EXA and OpenGL acceleration paths) seems rather inefficient, both wrt development resources and wrt utilization of modern hardware.
                  I don't think anyone is pushing back on running XRender over OpenGL; it's just that XRender is a lot easier to implement, so on any new hardware you tend to have XRender implemented and in use long before you have OpenGL. The R6xx/7xx situation is a pretty good example -- EXA with XRender support has been running for a couple of months already.

                  The second point is that running a simple API over a complex API tends to be great for experimenting but tends not to give you optimal performance. XRender over bare hardware, or over Gallium3D, is likely to outperform XRender over OpenGL.

                  Where things get interesting, though, is when you start looking at higher level APIs (e.g. the GUI toolkits, or Cairo, etc.). Maybe I'm missing something, but it seems like every few years someone implements Cairo over OpenGL and is very pleased with the results. Here's one that seems to be from 2004:

                  http://lists.freedesktop.org/archive...ch/001061.html

                  Anyways, the key points here are:

                  1. Nobody is pushing back on using OpenGL, just on blanket statements that the ONLY implementation should be over OpenGL.

                  2. The primary reason for having non-OpenGL implementations of XRender is that by the time a new chip has OpenGL support, it has normally had XRender support for a few months.

                  3. I have not seen test results, but my guess is that if you *did* compare XRender-over-OpenGL against XRender-over-raw-hardware that there would be a non-trivial performance penalty for using OpenGL. That's not a problem with OpenGL itself, just that XRender is a fairly simple function-at-a-time API while OpenGL is at its best drawing a bunch of things at once. That's why higher level APIs like Cairo seem like a better fit.
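
                  As a rough sketch of point 3, this is the shape of API that OpenGL rewards -- many primitives built client-side and submitted in one call, instead of one protocol request per rectangle as with XRender. Illustrative only, not driver code (GL 1.5-era client arrays; context setup and state omitted):

                  /* Draw n axis-aligned rectangles (x, y, w, h quadruples in xywh)
                   * as two triangles each, with a single glDrawArrays submission. */
                  #include <GL/gl.h>

                  void draw_rects_batched(const float *xywh, int n)
                  {
                      float verts[1024 * 12];           /* sketch assumes n <= 1024 */
                      for (int i = 0; i < n; i++) {
                          float x = xywh[4*i],     y = xywh[4*i + 1];
                          float w = xywh[4*i + 2], h = xywh[4*i + 3];
                          float *v = verts + 12 * i;
                          v[0] = x;     v[1] = y;       /* triangle 1 */
                          v[2] = x + w; v[3] = y;
                          v[4] = x + w; v[5] = y + h;
                          v[6] = x;     v[7] = y;       /* triangle 2 */
                          v[8] = x + w; v[9] = y + h;
                          v[10] = x;    v[11] = y + h;
                      }
                      glEnableClientState(GL_VERTEX_ARRAY);
                      glVertexPointer(2, GL_FLOAT, 0, verts);
                      glDrawArrays(GL_TRIANGLES, 0, 6 * n); /* one call for all n */
                      glDisableClientState(GL_VERTEX_ARRAY);
                  }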

                  The reason people keep babbling about Gallium3D is that it shows promise as an API which can make good use of 3D engine hardware but with considerably less overhead than OpenGL when "drawing one thing at a time".

                  Originally posted by BlackStar View Post
                  Another question: what does fglrx use to accelerate 2d? I've read that it doesn't support EXA. Is it XAA or is it something else entirely (where does Textured2D fit in this?)
                  Not exactly sure; AFAIK it's normally the XAA API, using the 2D engine on pre-6xx parts and presumably the 3D engine on 6xx and higher.
                  Last edited by bridgman; 04-01-2009, 10:05 PM.

                  • #39
                    Originally posted by MostAwesomeDude View Post
                    As bridgman said, XRender/EXA can be done on chips without 3D engines. It's not the best API, but it's a decent bottom line and is going to be faster than software OpenGL. Also setup on the app side is much cheaper for XRender compared to OpenGL.

                    If people want to rewrite or replace X, go ahead. Don't expect any support until you can show people why you've come up with something better than what we've got in place already. Also please check to make sure your complaint isn't on the X12 wishlist already.

                    If you want to bitch about driver support, instead consider learning C and fixing your drivers. That's how I got into Xorg work, and if I can do it, anybody can do it.

                    ~ C.
                    Amen to that.
                    Xorg is in great shape when you look at the ratio of code written to the number of available devs.

                    • #40
                      I know some feel that this sucks, but why can't X.org start dropping features and drivers that don't have a maintainer, are old, or have a small userbase? Why does the lack of maintainers and the need to support some parts have to drag the whole system down?

                      I think that good support for old hardware and features is beneficial to X.org and open source in general, but there is a trade-off, and at some point the old becomes a burden. If some feature has a small-to-medium userbase and no maintainer, then put it at risk of being dropped. If some company (or user) depends on that feature, they will support it or work around it. Yes, it's not nice, but it's better than letting the whole system grind to a halt.

                      To show that I put my money where my mouth is: if one of the features I need were dropped, I would feel that it sucks, but I'm not in a position to demand that others do the work for me. I would find some other way around the missing feature, support it myself, or buy new hardware.

                      -Antti

                      • #41
                        Originally posted by bridgman View Post
                        3. I have not seen test results, but my guess is that if you *did* compare XRender-over-OpenGL against XRender-over-raw-hardware that there would be a non-trivial performance penalty for using OpenGL. That's not a problem with OpenGL itself, just that XRender is a fairly simple function-at-a-time API while OpenGL is at its best drawing a bunch of things at once. That's why higher level APIs like Cairo seem like a better fit.
                        Well, not exactly the same comparison, but there is a Qt 4.5 benchmark that compares the latest Qt rendering using XRender, raster (their own software engine), and OpenGL. Raster seems 2x faster than XRender, and OpenGL something like 5x faster:
                        http://labs.trolltech.com/blogs/2008...-for-the-blit/

                        • #42
                          From the link above:
                          Lubos: the raster graphicssystem doesn’t use any accelerated rendering, everything is done in software. Remarkably, this is usually faster than using XRender when it comes to pixmap transforms and gradients for example.
                          Mac OS X (PowerBook, Intel Core 2 Duo, 2.4 GHz, 4 GB RAM, NVidia GeForce 8600 GM)
                          Native: 9 Fps
                          Raster: 30 Fps
                          OpenGL: 215 Fps
                          X11 (Intel Pentium 4, 3 GHz, 1 GB RAM, Nvidia GeForce 6600)
                          Native: 20 Fps
                          Raster: 36 Fps
                          OpenGL: 92 Fps
                          Obviously this is a single benchmark, but a) XRender on a P4 is twice as fast as (Quartz? Carbon?) Mac OS X on a Core2 (and we were saying that X11 was slow?) and b) the software renderer is twice as fast as XRender (ouch) and OpenGL is about 4.5 times faster (double ouch).

                          I'd love to see this test repeated on intel and radeon/radeonhd.

                          • #43
                            Thanks, that's one of the articles I was trying to find last night.

                            Those tests seem to shine a light on something different -- operations which are not accelerated by XRender today. The article doesn't say, but I imagine "native" on an X11 system refers to using XRender and whatever acceleration API is backing it (XAA or EXA).

                            The raster backend at least gives shadowfb-like acceleration for those ops while still being software rendered, while the OpenGL implementation hardware-accelerates most of the functions.

                            It's not "XRender-over-OpenGL", AFAICS, it's "a higher level API over OpenGL".

                            • #44
                              Originally posted by BlackStar View Post
                              From the link above:

                              Obviously this is a single benchmark, but a) XRender on a P4 is twice as fast as (Quartz? Carbon?) Mac OS X on a Core2 (and we were saying that X11 was slow?) and b) the software renderer is twice as fast as XRender (ouch) and OpenGL is about 4.5 times faster (double ouch).

                              I'd love to see this test repeated on intel and radeon/radeonhd.
                              Notice also that native Windows runs at 60 fps, 3x faster than X11, so at the very least you can't say X11 is fast. As for the OS X benchmark, what I've noticed from using it is that everything made with non-Apple SDKs looks awful (maybe that's because they are such as*ho*es with third-party developers), but apps made with Cocoa generally look good, run fast, and have nice, smooth animations.

                              • #45
                                Found this informative comment regarding OS X performance:

                                This is correct. Our software backend for QPainter is faster than our CoreGraphics backend for most things. The CoreGraphics engine is faster than the raster engine when drawing large areas, because these operations are H/W accelerated by CoreGraphics, but in general our software engine beats native rendering on Mac OS X.

                                There is one architectural clash that causes some performance problems on the CoreGraphics backend, and that is state handling. CoreGraphics uses the PDF / PostScript model, where you can only intersect a new clip with the current clip and only multiply a new transformation with the existing one. Neither of the two states can be reset, only saved / restored. QPainter allows setting these states (and you can argue whether this is wise or not, but it feels practical and it's the way QPainter works, so we don't want to change it) regardless of what they previously were, which means we need to do some nasty save/restore-stack handling on the CoreGraphics side, which is unfortunate performance-wise…

                                Then there is the problem that CoreGraphics has a fixed overhead on all drawing operations. The fastest I've gotten is some 100,000 plain rectangles per second (small ones, 4x4, 8x8, etc.), while our software engine can do 10x that, and style code and general widget code contain a lot of these small primitives, so the cost accumulates.

                                The benchmark only repaints one widget, while an application typically contains multiple widgets, and the repaint / flush-to-screen logic is not optimal for Mac OS X at the moment, so an app like Designer won't run any faster with -graphicssystem raster. We hope to be able to spend more time on these things in the coming months to iron them out and make it really shine.
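
                                To illustrate the clip/state clash the comment describes, here's roughly what a "set clip" has to look like on the CoreGraphics side; a sketch using the real CG calls, assuming one baseline CGContextSaveGState was done when the context was created:

                                #include <CoreGraphics/CoreGraphics.h>

                                /* CG clips can only be intersected, never widened,
                                 * so "setting" a clip means unwinding to a saved
                                 * baseline state first. */
                                void set_clip(CGContextRef ctx, CGRect clip)
                                {
                                    CGContextRestoreGState(ctx); /* drop old clip */
                                    CGContextSaveGState(ctx);    /* new baseline  */
                                    CGContextClipToRect(ctx, clip);
                                }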
