I want to help!

  • #21
    Still on the back burner, and probably not worth doing now that fixed function has pretty much disappeared. I started editing the HowVideoCardsWork wiki page instead to include information about fixed-function and shader-based GPUs, but lost so much work on timeouts that I gave up for the moment.

    Will try again later, doing all the work offline and *then* pasting into the page. Fingers crossed.
    Test signature



    • #22
      Thanks in advance for your efforts, Bridgman!

      BTW, all this documentation, even on how to pull, compile and set up... Is this all in response to the call for X documentation a while back?

      Also, there are the 2D and 3D drivers. Is Gallium both, or 'just' the 3D part?



      • #23
        I think the call for X documentation was more related to X itself; this is more aimed at the drivers. Strictly speaking, only one of the three driver components (the DDX) can be considered part of X anyway, although X does include logic to route 3D operations to an external 3D driver as well.

        The Gallium3D framework is intended to be used by both 2D and 3D drivers, although most people talk about it as a replacement for the current 3D HW driver layer in Mesa. The TG/VMware folks are building an entire driver stack on top of Gallium3D, although the target hardware in that case is the emulated GPU that their workstation software exposes.

        I guess the best way to think about Gallium3D is as a "hardware layer" for acceleration.
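
        To make the "hardware layer" idea concrete, here is a much-simplified C sketch of the split Gallium3D aims for. The names are illustrative only, not the real Mesa headers (the actual interface lives in src/gallium/include/pipe/ and is far larger): state trackers speak a small device-neutral "pipe" interface, and each GPU family supplies one pipe driver implementing it.

        struct pipe_context {
            /* bind shader state previously compiled by this pipe driver */
            void (*bind_fs_state)(struct pipe_context *ctx, void *fs);

            /* kick off rendering from the currently bound vertex buffers */
            void (*draw_arrays)(struct pipe_context *ctx,
                                unsigned mode, unsigned start, unsigned count);
        };

        /* A state tracker (OpenGL, OpenVG, an Xorg driver, ...) translates its
           API's state into calls like these; it never touches the hardware. */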
        Test signature



        • #24
          Originally posted by V!NCENT
          Also, there are the 2D and 3D drivers. Is Gallium both, or 'just' the 3D part?
          For the moment, Gallium is concerned with 3D; however, IIRC there were plans to implement Xorg support as a Gallium state tracker, which would make it both.



          • #25
            Originally posted by bridgman
            Still on the back burner, and probably not worth doing now that fixed function has pretty much disappeared. I started editing the HowVideoCardsWork wiki page instead to include information about fixed-function and shader-based GPUs, but lost so much work on timeouts that I gave up for the moment.

            Will try again later, doing all the work offline and *then* pasting into the page. Fingers crossed.
            This would be much appreciated, of course!



            • #26
              OK. So if I am understanding this correctly (and please correct me if I am wrong):

              The past
              X ran on the CPU only and dumped everything to a framebuffer via a driver.
              Mesa was a software-only graphics library that rendered into X, which X in turn processed to be part of what it dumped to the framebuffer.

              Later on
              X got code to offload work to graphics cards: a 2D graphics card driver for hardware acceleration.
              Mesa also got code (a hardware abstraction layer, i.e. drivers) to offload work to the graphics card, but still had to do this through X.

              Further in time
              DRI came around so Mesa could talk directly to the graphics hardware with drivers, to eliminate latency.

              Today
              Instead of the drivers being part of X/Mesa, Mesa is now part of the driver (a state tracker on top of it), and work is underway for X to become a state tracker too (or to be able to act like one *ouch, headaches*).

              Memory management is now done in the Linux kernel (and soon in the *BSDs too) through GEM/TTM, with KMS moving mode setting into the kernel as well, to eliminate even more latency and reduce code duplication.
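
              (If you want to poke at the kernel side yourself, below is a minimal sketch using libdrm's real modesetting API to ask the kernel, with no X server involved, what outputs the card has. The device path is an assumption and varies per system; build with something like gcc kms.c $(pkg-config --cflags --libs libdrm).)

              #include <stdio.h>
              #include <fcntl.h>
              #include <xf86drmMode.h>

              int main(void)
              {
                  /* talk to the kernel graphics driver directly, no X involved */
                  int fd = open("/dev/dri/card0", O_RDWR);
                  if (fd < 0)
                      return 1;

                  /* KMS: ask the kernel what this card is driving */
                  drmModeRes *res = drmModeGetResources(fd);
                  if (!res)
                      return 1;

                  printf("%d connectors, %d CRTCs\n",
                         res->count_connectors, res->count_crtcs);

                  drmModeFreeResources(res);
                  return 0;
              }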


              So now a question too:
              All these other state trackers, like OpenCL and vector graphics and whatnot... where are those? Are they in Mesa? Or can they live anywhere and just act as a state tracker?



              • #27
                As I understand the state of play:

                The past:
                In the distant past, graphics hardware was accessed by memory-mapped I/O through a /dev node. It could only be accessed by one process at a time, which was generally the X server. The X server drivers used XAA, which was adequate for drawing simple geometry, but lacked the blending options necessary to support anti-aliasing and accelerated compositing.

                Later on, DRI happened, as people wanted 3D apps to be able to access the hardware without all the instructions having to go through the X server - this entailed putting arbitration for multiple command streams into the kernel (the DRM). X was responsible for managing memory (and it was kind of limited in how good a job it could do from that position, leading to more swapping than necessary) and for setting up the context for the DRI slave (i.e. keeping it updated on what portions of the screen it was allowed to draw to). Mesa provided drivers using the contexts set up by Xorg in order to accelerate OpenGL via this mechanism.
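
                (As a small aside, the DRM really is just an ordinary kernel interface reached through a device node. A minimal sketch using libdrm's real drmGetVersion() call, with the node path being an assumption that varies between systems:)

                #include <stdio.h>
                #include <fcntl.h>
                #include <xf86drm.h>

                int main(void)
                {
                    /* the device node is user space's door to the kernel's
                       command-stream arbitrator that DRI introduced */
                    int fd = open("/dev/dri/card0", O_RDWR);
                    if (fd < 0)
                        return 1;

                    /* ask which kernel DRM driver is behind the node */
                    drmVersionPtr v = drmGetVersion(fd);
                    if (v) {
                        printf("kernel DRM driver: %s %d.%d.%d\n", v->name,
                               v->version_major, v->version_minor,
                               v->version_patchlevel);
                        drmFreeVersion(v);
                    }
                    return 0;
                }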

                Later on, the Xrender extension happened and X gained the ability to blend (using Porter-Duff compositing operations), and EXA came along to provide the necessary driver hooks to support this.
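
                (For reference, the Porter-Duff "over" operator boils down to dst = src + (1 - src_alpha) * dst per premultiplied colour component. A toy scalar version in C, just to show the arithmetic those driver hooks accelerate:)

                /* one 8-bit component; src and dst are premultiplied by alpha */
                static unsigned char pd_over(unsigned src, unsigned src_a,
                                             unsigned dst)
                {
                    /* (255 - src_a) scales dst down; +127 rounds the division */
                    return (unsigned char)(src + ((255 - src_a) * dst + 127) / 255);
                }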

                Then people started tinkering with composited desktops and things started sucking. DRI contexts had the rendering slave output to a section of the front buffer, bypassing X, which would look broken if people were making the window that context was in do ridiculous wobbly shenanigans, and was hardly appropriate for the contexts the window manager would use itself in drawing the fancy desktop. Xgl attempted to solve this by starting an X server, creating a window in it, fullscreening that, starting OpenGL in that context and presenting *itself* as an X server, wrapping the original X drawing commands to OpenGL, and providing accelerated indirect contexts by a passthrough mechanism. Nvidia were less than thrilled about this, as it meant Xgl would need its passthrough mechanism updated to support any new GL extensions, so AIGLX appeared instead, enabling the X server itself to use Mesa contexts to provide acceleration for indirect contexts.

                This all led to exposing faults with DRI1/DRM - the excessive swapping when switching between contexts and the inability to accelerate drawing to redirected surfaces hurt its credibility for a composited desktop.

                ...hence DRI2, which moved memory management into the kernel (hopefully reducing swapping), enabled lockless sharing of the accelerator, and introduced the ability to accelerate drawing to redirected surfaces.
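
                (A sketch of the buffer-sharing half of that, using the real generic GEM ioctls from the kernel's drm.h; the include path may vary per system. Creating the buffer handle in the first place goes through a driver-specific ioctl, which is assumed to have happened already:)

                #include <sys/ioctl.h>
                #include <drm/drm.h>

                /* Buffers now live in the kernel's memory manager; flink gives
                   one a global name another process can pass to
                   DRM_IOCTL_GEM_OPEN, instead of X owning all video memory. */
                int share_buffer(int fd, unsigned int handle, unsigned int *name)
                {
                    struct drm_gem_flink flink = { .handle = handle };

                    if (ioctl(fd, DRM_IOCTL_GEM_FLINK, &flink) < 0)
                        return -1;

                    *name = flink.name;
                    return 0;
                }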

                In addition to this, Mesa is being restructured (Gallium) in recognition of the need to move to a more compiler-based architecture as accelerators become more processor-like and OpenGL usage becomes more programming-like. The thing about compiling is that it's typically done in several stages - languages are parsed into an internal intermediate representation, which is then fed to the stage that translates it to the particular chip's language, with optimisation passes in between. This makes it possible to put multiple front ends on the compiler innards of a Gallium driver - these are the "state trackers", which will at first be OpenGL, but adding an Xorg driver as a state tracker could kill the need for individual chip drivers in X. Other state trackers are being played with, like OpenCL, Direct3D (for WINE), OpenVG (standard 2D graphics acceleration), etc.
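
                (An outline of that compiler-like flow in C, with hypothetical names invented for the example - the shared IR in real Gallium is called TGSI:)

                struct ir_shader;    /* device-neutral intermediate representation */
                struct gpu_binary;   /* chip-specific machine code */

                /* front end: one per state tracker (OpenGL, OpenVG, OpenCL,
                   Direct3D, an Xorg driver, ...) */
                struct ir_shader *frontend_parse(const char *api_source);

                /* shared optimisation passes over the IR */
                void ir_optimize(struct ir_shader *sh);

                /* back end: one per GPU family, inside the Gallium pipe driver */
                struct gpu_binary *backend_codegen(struct ir_shader *sh);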



                • #28
                  Very interesting/informative post, DuSTman.

                  One thing, though:

                  Originally posted by DuSTman
                  Other state trackers are being played with... Direct3D (for WINE)...
                  I thought, as of the latest news, the D3D state tracker was only being used for Windows virtualization and wouldn't be released for Linux?



                  • #29
                    You're probably thinking of the work VMware is doing. There's also another effort, but AFAIK it's somebody doing it just for fun:
                    [link to a Phoronix article]



                    • #30
                      Well, the HowVideoCardsWork page hasn't been updated yet and isn't complete; PCIe and apertures are missing. But that doesn't mean I can't learn it. Would it be wise to learn it, or only if I wanted to study history?

                      I read the diff/sdiff/diff3 info page. Just to compare file differences... KDE has a nice file-comparison app as part of KDevelop, which might be easier for me (I think in visuals and not in a logical order, if you know what I mean).

                      I have 'studied' some history of X (X386, XFree86, Accelerated-X, Xsun, X.Org) and wondered if I, next to OpenGL and Gallium, have to worry about the Open Group X spec? Or is that X.Org today?

