
Lots Of X.Org GLAMOR Improvements Planned


  • #1

    Phoronix: Lots Of X.Org GLAMOR Improvements Planned

    Eric Anholt at Intel has been devoting much time recently to cleaning up and improving GLAMOR to make it possible to have fast and reliable 2D acceleration via OpenGL within the X.Org world while using a device independent driver...

    http://www.phoronix.com/vr.php?view=MTU2ODI

  • #2
    All the videos
    http://mirror.linux.org.au/linux.conf.au/2014/

    Apparently none about Wayland.



    • #3
      I think the idea is to use Glamor on all hardware with XWayland, which explains why an Intel guy is interested in working on it.



      • #4
        adding X-Video overlay support
        Glamor already has working XV support (albeit it tears); does "X-Video overlay" mean something else?

        On r600g, glamor is perfectly stable and, performance-wise, just like EXA. It has some rough edges: transparent GTK2 tray icons if no compositor is used (on Xfce at least), some graphic corruption (changed letters here and there) in Firefox/SeaMonkey (the original letters come back if the area in question is refreshed by selection or, if it's a link, by hovering over it), and the hovered menu-item text in LibreOffice has the same color as the selector. All in all, except for the GTK2 icon issue, the problems are hard to notice.



        • #5
          MP4?

          What the heck is Linux.Conf.Au 2014 doing posting MP4 video files with H.264 and AAC? I'd love to look at some of them, but I have no idea what these obscure proprietary file formats are...



          • #6
            Originally posted by OneTimeShot View Post
            What the heck is Linux.Conf.Au 2014 doing posting MP4 video files with H.264 and AAC? I'd love to look at some of them, but I have no idea what these obscure proprietary file formats are...
            That's what happens when you live under a rock.



            • #7
              Originally posted by mark45 View Post
              That's what happens when you live under a rock.
              You mean Linux.Conf.Au, right?



              • #8
                I'm curious, do the SI cards actually have dedicated 2d coprocessors that are used in the Catalyst driver?



                • #9
                  Originally posted by zanny View Post
                  I'm curious, do the SI cards actually have dedicated 2d coprocessors that are used in the Catalyst driver?
                  No, AFAIK the last AMD card to have a 2d unit was r500. I presume you meant 2d accel and not video accel.



                  • #10
                    Originally posted by curaga View Post
                    No, AFAIK the last AMD card to have a 2d unit was r500. I presume you meant 2d accel and not video accel.
                    That is what I meant. Good to know. So the only real downside to glamor is that it is implemented in terms of OpenGL rather than in terms of native GPU primitives?



                    • #11
                      Originally posted by zanny View Post
                      That is what I meant. Good to know. So the only real downside to glamor is that it is implemented in terms of OpenGL rather than in terms of native GPU primitives?
                      Yes.



                      • #12
                        How would this ever match the performance or power budget of a properly implemented 2D engine and hw overlays?

                        All people need to do is implement proper buffer synchronization (why was this only an afterthought with dma-buf?) and actually write the simple driver code for the fixed-function dedicated engines...



                        • #13
                          Originally posted by libv View Post
                          How would this ever match the performance or power budget of a properly implemented 2D engine and hw overlays?

                          All people need to do is implement proper buffer synchronization (why was this only an afterthought with dma-buf?) and actually write the simple driver code for the fixed-function dedicated engines...
                          I feel like I'm falling into a trick question here, because you should know this.

                          Modern hardware doesn't have that: it runs the 2D code on the 3D engine anyway. So if you can run it through an existing API/driver codebase with good-enough performance, why waste development time reinventing the wheel?

                          Of course, it hasn't yet been proven that OpenGL delivers that good-enough performance, but if it does, there's no reason not to use it.


                          General code will never reach 100% of the possible speed, but by that argument Mesa itself shouldn't be used and every driver should build its 3D driver from scratch with no shared codebase at all. At some point you hit diminishing returns, and it doesn't make sense to spend hundreds of hours just to save an extra clock cycle here and there.
                          Last edited by smitty3268; 01-18-2014, 04:28 AM.



                          • #14
                            Originally posted by smitty3268 View Post
                            I feel like I'm falling into a trick question here, because you should know this.

                            Modern hardware doesn't have that: it runs the 2D code on the 3D engine anyway. So if you can run it through an existing API/driver codebase with good-enough performance, why waste development time reinventing the wheel?

                            Of course, it hasn't yet been proven that OpenGL delivers that good-enough performance, but if it does, there's no reason not to use it.


                            General code will never reach 100% of the possible speed, but by that argument Mesa itself shouldn't be used and every driver should build its 3D driver from scratch with no shared codebase at all. At some point you hit diminishing returns, and it doesn't make sense to spend hundreds of hours just to save an extra clock cycle here and there.
                            Yes, this was a rhetorical question. I know the answer: laziness. Just like with megadrivers... simply too lazy to fix up the Mesa namespace...

                            Ask yourself: why does Android have hwcomposer, and why did Wayland suddenly grow the same thing a bit later on (with the hastily implemented KMS planes to boot)...

                            A 3D engine is huge, quite powerful, and very versatile these days. That means setting it up for a simple operation wastes a lot of CPU cycles. Then there is the power draw of a 3D engine constantly running for tiny little things, plus the power cost of those wasted CPU cycles...

                            A 2D engine is usually nothing more than something that takes two buffers (with their info), does a simple operation on them, and outputs to another buffer. How much setup does that require, d'you think? How efficient will that silicon be? All you have to do is make sure the buffers are synchronized; heck, you get nice interrupts for that.

                            A hardware overlay, again, is hardware designed specifically for this task. All you have to do is power it up, point it at the buffers, tell it where to display them, and it sends nice interrupts when it displays a new buffer. Again, much, much more efficient.

                            All one has to do is stop being lazy and shortsighted.



                            • #15
                              Originally posted by libv View Post
                              All one has to do is stop being lazy and shortsighted.
                              Well, in that case you really shouldn't be using X anywhere you want something highly optimized and low-power anyway. Just switch to Wayland and avoid glamor entirely, because X itself brings a ton of exactly that kind of overhead.

                              Or be lazy and shortsighted, and keep using X.

