GLAMOR'ized Radeon Driver Shows Hope Over EXA

  • #21
    Originally posted by brosis View Post
    My understanding is that GLAMOR is simply a must for AMD GCN, because there is no such thing as a 2D engine in hardware anymore.
    The 2D engine was removed with R600, but that's not the problem. Using the 3D engine for RENDER is quite a lot of work, and you end up duplicating quite a bit of code found in the 3D/OpenGL driver. That is why glamor makes some sense, especially if there is a shortage of manpower.
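    For a sense of what that duplication looks like, here is a hedged sketch (illustrative only, not glamor's real code) of the simplest possible case - a RENDER Composite with PictOpOver and no mask - done on the 3D engine. It assumes a legacy GL context is current, the source pixmap already lives in src_tex, and the destination pixmap is bound as the draw framebuffer:

    Code:
    #include <GL/gl.h>

    /* Roughly what one RENDER Composite(PictOpOver, src, None, dst) turns into
     * when you do it on the 3D engine.  Fixed-function GL for brevity. */
    static void composite_over(GLuint src_tex, float x, float y, float w, float h)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, src_tex);      /* source pixmap as a texture */

        /* PictOpOver is premultiplied Porter-Duff "over" */
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        /* one textured quad covering the composite rectangle */
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
        glEnd();
    }

    Multiply that by every RENDER operator, source/mask format, repeat mode, transform and filter, and the 2D driver ends up re-implementing a good chunk of the state management the OpenGL driver already has - which is the duplication being described.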



    • #22
      Originally posted by brent View Post
      The 2D engine was removed with R600, but that's not the problem. Using the 3D engine for RENDER is quite a lot of work, and you end up duplicating quite a bit of code found in the 3D/OpenGL driver. That is why glamor makes some sense, especially if there is a shortage of manpower.
      Agreed.

      In fact, 2D is fast enough (TM) on modern hardware as it is. Decoding is handled by VDPAU nowadays, and as long as video playback is fast and tear-free, I don't think anyone really notices 2D performance anymore.

      If going over to GLAMOR as a thin layer over GL can drastically simplify the driver without hurting performance, then it's the logical thing to do.
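      For anyone who wants to try it: assuming a radeon DDX built with glamor support, switching away from EXA is a one-line xorg.conf change (see the radeon(4) man page for the exact option values):

      Code:
      Section "Device"
          Identifier "Radeon"
          Driver     "radeon"
          Option     "AccelMethod" "glamor"   # instead of the EXA default
      EndSection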



      • #23
        Originally posted by agd5f View Post
        The problem is X's 2D semantics don't map well to hardware. E.g., you can't just accelerate X lines with GL lines; you need to do a bunch of complex munging in the shaders to make the semantics match. Adding back a 2D engine doesn't really help. Most modern apps use RENDER features, which require a 3D engine but also have semantics that don't match most 3D hardware. So you still have to do munging in the shaders to match semantics.

        So you basically have two choices:
        1. Implement a native 2D acceleration architecture for your chips. This ends up being as complex as or more complex than the 3D driver (Intel's ddx is bigger than its 3D driver, IIRC).
        2. Implement 2D in terms of a 3D API. Something like glamor or XA.
        Would Wayland and/or Mir be better for this type of problem?
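        To illustrate the quoted point, here is a hedged sketch (not glamor's actual shaders; every name below is made up) of the kind of "munging" involved: even a plain masked RENDER Composite turns into a fragment shader that runs the source through the picture transform and modulates it by the mask.

        Code:
        /* Illustrative fragment shader for a masked Composite, non-component-alpha
         * case; texture-size normalization of the coordinates is omitted. */
        static const char *composite_fs =
            "#version 120\n"
            "uniform sampler2D source;\n"
            "uniform sampler2D mask;\n"
            "uniform mat3 source_transform;   /* RENDER picture transform */\n"
            "varying vec2 dest_pos;           /* destination position */\n"
            "void main() {\n"
            "    vec3 s = source_transform * vec3(dest_pos, 1.0);\n"
            "    vec4 src = texture2D(source, s.xy / s.z);\n"
            "    float a  = texture2D(mask, dest_pos).a;\n"
            "    gl_FragColor = src * a;\n"
            "}\n";

        Core X primitives are worse still: the server's zero-width line rasterization doesn't match GL's diamond-exit rule, so GL_LINES alone can't reproduce them pixel-exactly - which is why option 1 in the quote tends to balloon to the size of a 3D driver.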



        • #24
          Originally posted by Rallos Zek View Post
          Would Wayland and/or Mir be better for this type of problem?
          Wayland and Mir don't provide a rendering API. It's up to the apps to use an existing rendering API such as OpenGL.



          • #25
            Originally posted by Rallos Zek View Post
            Would Wayland and/or Mir be better for this type of problem?
            I imagine they would be pretty much the same as long as you weren't running X over them. That said, AFAIK a lot of X apps can use OpenGL rather than Render already.



            • #26
              Originally posted by Rallos Zek View Post
              Would Wayland and/or Mir be better for this type of problem?
              Wayland gives a surface/texture to the application, and the application uses a toolkit (or OpenGL directly) to paint that surface - just like a 3D game, which is what modern chips were designed for anyway.
              In the end, all surfaces are painted by the 3D engine through the GPU driver. No protocol wrapping, no overrides, no window-in-window-in-window; synced content, streamlined output. Advantages all the way.

              I do wonder about the missing network functionality, though... how about socketing the OpenGL calls between the library and the driver over the network? That would allow remote rendering: for example, rendering on the local machine with the output shown on a remote machine, and vice versa. Sure, the network traffic might be huge, but that's already the case with Xorg under a modern workload. Hm?
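              As a hedged sketch of that model (not a complete client: the shell/toplevel role, error handling and the redraw loop are omitted, and the names are just illustrative), the client binds wl_compositor, creates a surface, attaches its own EGL/GLES context to it and paints it itself; the compositor only ever sees the finished buffer:

              Code:
              #include <string.h>
              #include <wayland-client.h>
              #include <wayland-egl.h>
              #include <EGL/egl.h>
              #include <GLES2/gl2.h>

              static struct wl_compositor *compositor;

              static void global_add(void *data, struct wl_registry *reg, uint32_t name,
                                     const char *iface, uint32_t version)
              {
                  if (strcmp(iface, "wl_compositor") == 0)
                      compositor = wl_registry_bind(reg, name, &wl_compositor_interface, 1);
              }
              static void global_remove(void *data, struct wl_registry *reg, uint32_t name) {}
              static const struct wl_registry_listener listener = { global_add, global_remove };

              int main(void)
              {
                  struct wl_display  *dpy = wl_display_connect(NULL);
                  struct wl_registry *reg = wl_display_get_registry(dpy);
                  wl_registry_add_listener(reg, &listener, NULL);
                  wl_display_roundtrip(dpy);                       /* bind wl_compositor */

                  /* the compositor hands the client a surface... */
                  struct wl_surface    *surf = wl_compositor_create_surface(compositor);
                  struct wl_egl_window *win  = wl_egl_window_create(surf, 256, 256);

                  /* ...and the client attaches a GLES context and paints it itself */
                  EGLDisplay edpy = eglGetDisplay((EGLNativeDisplayType)dpy);
                  eglInitialize(edpy, NULL, NULL);
                  EGLint cfg_attribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_NONE };
                  EGLConfig cfg; EGLint n;
                  eglChooseConfig(edpy, cfg_attribs, &cfg, 1, &n);
                  EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
                  EGLContext ctx = eglCreateContext(edpy, cfg, EGL_NO_CONTEXT, ctx_attribs);
                  EGLSurface esurf = eglCreateWindowSurface(edpy, cfg,
                                                            (EGLNativeWindowType)win, NULL);
                  eglMakeCurrent(edpy, esurf, esurf, ctx);

                  glClearColor(0.2f, 0.3f, 0.4f, 1.0f);            /* real drawing goes here */
                  glClear(GL_COLOR_BUFFER_BIT);
                  eglSwapBuffers(edpy, esurf);                     /* finished buffer -> compositor */

                  wl_display_roundtrip(dpy);
                  return 0;
              }

              The protocol itself carries no drawing commands at all, which is also why agd5f says Wayland doesn't provide a rendering API; a network-transparent variant would therefore have to ship either the GL calls or the finished buffers across the wire.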



              • #27
                Originally posted by brosis View Post
                Also, modern desktop GPUs have had the 2D engine removed.
                Maybe Intel chips still have a 2D engine, which may make sense for integrated solutions and low-resolution graphics, but not for the consumer market.

                My understanding is that GLAMOR is simply a must for AMD GCN, because there is no such thing as a 2D engine in hardware anymore.
                So one can't really compare SNA with GLAMOR; it's like comparing a two-wheel bike with a four-wheel car - same result, but a completely different configuration with different dynamics, requiring a completely different driving approach.

                Also, it would be interesting to see how this compares on Wayland. Since Wayland just hands out a texture and lets the application draw into it using OpenGL or toolkits - each calling straight into the driver's capabilities - it should be easy to get FPS numbers when benchmarking real-life generic tasks.

                Xorg has a lot of overhead, a lot of extensions, and a weird window system. For example, check the youtube-swfdec result, which is basically copying a bunch of pixels onto the screen. Both fglrx and EXA perform worse than software there, meaning that under Xorg it is more efficient to copy the buffers via the CPU than to call the 3D engine and do the swap via the GPU. A modern desktop is not drawing lines and text (which would be easy to wrap via X); it operates on lots of pixels, using compositing and OpenGL features (shading etc.). With X wrapping everything, the advantage of outsourcing the work to hardware acceleration is pretty much killed by the slow postman. Giving each application a texture, and letting toolkits expand that into OpenGL calls (or using OpenGL directly), results in zero packaging. Xorg was designed for a CPU-only, networked, line-and-text UI. That use case hardly exists now (outside of remote server management), so it's obvious what to prioritize for the future. So, the big question: going forward, do we game on our machines, or do we remotely admin them with GTK1-like toolkits?

                The only thing that surprises me is Intel itself, still pouring effort into 2D acceleration while being the authors of the (3D-based) Wayland.

                The "professional driver" from NVIDIA doesn't even support Wayland in the first place, and it overrides half of Xorg. That proves two points: how incapable Xorg is of addressing the current needs of a professional environment, and how uncooperative NVIDIA is about designing any next-generation technology outside of its own property. The more you support NVIDIA, the more you are doomed to have it as the only possible choice. It's like Windows... but green?



                Why? No UVD?
                Yeah, I use UVD in both for 8-bit H.264 content, and Xv is nice to have for 10-bit H.264 anime. My point was that I'm using glamor everywhere and I can't reproduce the bug the OP mentioned.



                • #28
                  Originally posted by agd5f View Post
                  Wayland and Mir don't provide a rendering API. It's up to the apps to use an existing rendering API such as OpenGL.
                  One quick question... didn't 3D chips used to have a limit on the number of simultaneous 3D contexts? How can each Wayland application use OpenGL for its drawing without stepping on the others' toes?



                  • #29
                    Originally posted by Drago View Post
                    One quick question... didn't 3D chips used to have a limit on the number of simultaneous 3D contexts? How can each Wayland application use OpenGL for its drawing without stepping on the others' toes?
                    There haven't been such limits in a decade or so?



                    • #30
                      Originally posted by curaga View Post
                      There haven't been such limits in a decade or so?
                      Pretty much -- there are still hardware limits (for example our GPUs can only have a finite number of page tables simultaneously active) but in most cases the drivers take care of multiplexing those resources across clients when necessary.
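                      As a hedged illustration of that from the application side (device path and driver name are just examples): every client simply opens the DRM device and gets its own GPU context; nobody has to reserve one of the finite hardware slots themselves, the kernel driver arbitrates.

                      Code:
                      #include <fcntl.h>
                      #include <stdio.h>
                      #include <unistd.h>
                      #include <xf86drm.h>

                      int main(void)
                      {
                          /* any number of processes can do this concurrently */
                          int fd = open("/dev/dri/card0", O_RDWR);
                          if (fd < 0) {
                              perror("open /dev/dri/card0");
                              return 1;
                          }

                          drmVersionPtr v = drmGetVersion(fd);
                          if (v) {
                              printf("kernel driver: %s\n", v->name);   /* e.g. "radeon" */
                              drmFreeVersion(v);
                          }

                          /* a real client would now create a GPU context (via Mesa or
                           * libdrm) and submit command buffers; the kernel multiplexes
                           * the finite hardware resources across all open clients. */
                          close(fd);
                          return 0;
                      }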

