  • Theatre mode - maybe get involved?

    The Windows driver for Radeon cards has support for something called "Theatre Mode", where video (overlay?) content, apart from being shown normally (windowed or fullscreen) on the primary screen, is simultaneously shown full screen on the TV. The TV output is blank/black when no video is shown.

    This is a function I really miss on my Linux machine, although I now have the TV connected to RGB signals from the VGA output, so I'd like to be able to select which output receives the "cinema content".

    An acceptable alternative would be to just drive the outputs separately and use the DISPLAY env variable to send output to the TV screen. This "sort of" works already, but dual screens mean no xv/gl direct rendering, which means horrible stuttering of hd content on my primary screen. Besides, it's nice to be able to see the video on the two screens simultaneously.


    Now that documentation is starting to appear, I feel very tempted to try to implement the functionality myself. I have all the background knowledge (a master's in computer science, plus low-level hardware and Linux kernel programming experience), but I have no idea where to start looking for what I want. 'radeon_dri.c'? 'radeon_video.c'? 'theatre.c'? Etc... and I guess since it is the _combination_ dual screen + direct rendering that doesn't seem to work, I'll need to dig deep.

    Simply: if I get involved with this, where should I put my time to most effectively get what I want? Perhaps cloned windows with compositing is a more modern and general approach?


    On a more general note - from what I can tell from the card's functionality, there are two "display engines" capable of reading data from video memory, each driving one video output (DVI-D, DVI-A, VGA, or TV). Is this correct?

    From the way MergedFB works I also guess that these engines cannot scale the output, so if the same framebuffer is to be displayed on two different devices, the best one can do is window the contents on the lower resolution device. Does this mean that "Theatre Mode" renders the video twice, once for each display?

    Is the second display engine in any way connected to the "Secondary Display Controller" ("chip 1002,4a69 card 1002,0003 rev 00 class 03,80,00 hdr 00") device detected by Xorg that always has "No matching Device section" (no matter if you use a secondary screen or not)? Or what is this Secondary Display Controller device?

    --
    Arvid

  • #2
    To get back to this: I tried dual head with xf86-video-ati-6.8.0 today, which resulted in "Server caught signal 11, aborting."

    But not before printing
    Code:
    RADEON(0): Direct Rendering Disabled -- Dual-head configuration is not working with DRI at present.
    to the log.

    So, again, is this a hardware (R420) limitation or a software one? If the latter, where is the limitation (DDX, DIX, ...) and what constitutes the limitation (i.e. what is needed to get it working)?

    I'm trying to read up on the X server but it's really difficult to know where to start looking without some help. I hope the R200 docs are released soon, with the "beginner intro" I hear they contain!



    • #3
      I'm not sure how exactly this is implemented in the windows driver, but there are several options. First let's discuss terms a bit. The display controller is a general term for the part of the chip that controls outputs and associated things. This includes crtcs, plls, output encoders (DACs, TMDS, LVDS, etc.), the panel scaler, the tv encoder, etc. The crtcs and plls define the timing and clock for each head, and the outputs drive the actual monitor. Each crtc can be sourced to any output, so crtc0 can drive LVDS and crtc1 can drive DAC1 or vice versa; you can even drive multiple outputs with the same crtc, provided the monitors accept the same timing and clock. The tv encoder can source from either crtc or directly from the transform unit in the overlay.
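
      (A toy model of the routing, not driver code -- just to make the "any crtc can feed any output" point concrete; all the names here are made up.)
      Code:
      #include <stdio.h>

      /* Toy model only: two crtcs, a few output encoders, each encoder sourced
       * from one crtc.  Several encoders may share a crtc as long as the attached
       * monitors accept the same timing/clock.  No real register programming. */
      typedef enum { CRTC0, CRTC1 } Crtc;
      typedef struct { const char *name; Crtc source; } OutputEncoder;

      int main(void)
      {
          OutputEncoder outputs[] = {
              { "LVDS",  CRTC0 },  /* panel driven by crtc0                        */
              { "DAC1",  CRTC1 },  /* CRT driven by crtc1 (could just as well swap) */
              { "TVDAC", CRTC1 },  /* tv encoder sharing crtc1's timing/clock       */
          };
          unsigned i;

          for (i = 0; i < sizeof(outputs) / sizeof(outputs[0]); i++)
              printf("%s sourced from crtc%d\n", outputs[i].name, (int)outputs[i].source);
          return 0;
      }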

      So for theater mode, I'm guessing the tv encoder is either fed directly by the transform unit, or a separate buffer is set up, one of the crtcs is pointed at it and sourced to the tv encoder, and the video is displayed from there. Either way could be implemented.

      DRI works in dualhead mode, but only when both crtcs are scanning out of the same buffer (mergedfb or randr 1.2). In zaphod mode (screen based dualhead), the driver loads twice and each head gets its own buffer. The DRI doesn't work with zaphod mode because the current dri stack doesn't deal with multiple framebuffers/drivers. You could probably get the DRI working on one of the two heads in zaphod mode with a bit of work, though (IIRC some drivers support this).

      As for implementation, there's no common infrastructure for theatre mode, so you'd have to implement it as a driver specific thing. I'd make it a driver option or Xv option. Then, when it's set, steal a crtc or the tv encoder and set it up and turn it on when the user uses Xv.
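
      (Rough sketch of the option plumbing, assuming the usual xf86 option helpers; the "TheatreMode" name and the theatreMode field are placeholders, not anything in the driver today.)
      Code:
      /* Hypothetical option table entry -- sketch only, not the real driver's table. */
      typedef enum {
          OPTION_THEATRE_MODE
      } RADEONOpts;

      static const OptionInfoRec RADEONOptions[] = {
          { OPTION_THEATRE_MODE, "TheatreMode", OPTV_BOOLEAN, {0}, FALSE },
          { -1,                  NULL,          OPTV_NONE,    {0}, FALSE }
      };

      /* Somewhere in PreInit, after xf86ProcessOptions() has filled the table: */
      info->theatreMode = xf86ReturnOptValBool(info->Options, OPTION_THEATRE_MODE, FALSE);
      if (info->theatreMode)
          xf86DrvMsg(pScrn->scrnIndex, X_CONFIG,
                     "Theatre mode requested; will grab a crtc/tv encoder for Xv\n");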



      • #4
        Ok.

        DRI on one of the heads sounds like a good start.

        I see there's a lot of documentation on DRI at the freedesktop dri page (http://dri.freedesktop.org/wiki/). I'll start there!



        • #5
          Well... still reading

          I couldn't help looking at the code though, and I found RADEONPreInitDRI() in radeon_driver.c.

          If I understand correctly, the "ScrnInfoPtr pScrn" parameter is specific to a certain screen, and the pScrn->driverPrivate field is used to store, well, screen private data for the driver (in the form of a RADEONInfoPtr, which is a pointer to a huge struct also typedef'ed to RADEONInfoRec).

          This struct has the fields Bool IsPrimary, Bool IsSecondary and Bool directRenderingEnabled. So instead of DRI being disabled completely whenever "xf86IsEntityShared" is true, I disabled it only for the secondary screen (i.e. I removed the xf86IsEntityShared check from RADEONPreInitDRI).
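
          Roughly what the change looks like (a paraphrased sketch, not the literal code):
          Code:
          /* In RADEONPreInitDRI(): rather than bailing out whenever the entity is
           * shared (dual head), only disable DRI on the secondary head.
           * Sketch of the idea only. */
          RADEONInfoPtr info = (RADEONInfoPtr)pScrn->driverPrivate;

          if (info->IsSecondary) {
              xf86DrvMsg(pScrn->scrnIndex, X_WARNING,
                         "Direct rendering disabled on the secondary head\n");
              info->directRenderingEnabled = FALSE;
          }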

          Result: the server starts with two screens as usual, glxinfo reports "direct rendering: Yes" for the primary screen and "direct rendering: No" for the secondary. It works fine until I use a DRI client (glxgears) on the primary, at which point the secondary screen is garbled.

          It seems the source of video data is changed for the secondary; it shows a garbled version of what's on the primary. When I move windows around on the primary they move around on the secondary too, but spread out and completely garbled (since the screens are different resolution, I guess).

          Also, for some reason xv video output seems to take about twice as much cpu as without dual head.

          Any idea how to continue?



          • #6
            My vague recollection was that "theater mode" has to trick the OS (in this case the X server) into believing that there is only one screen, displayed twice.

            If the two screens are different resolutions then they need to display out of a framebuffer which is at least as big as the larger one, and the smaller screen needs to know that the pitch (# of pixels' worth of memory between rows) is different from the width (# of pixels visible across the screen). Not sure if radeon has a "virtual" capability these days (I think it does) but that would seem like the first step -- does the primary or secondary have the higher resolution right now?
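
            (Toy numbers, just to illustrate the pitch vs. width distinction:)
            Code:
            #include <stdio.h>

            /* Toy example of pitch vs. width: a 1024-pixel-wide screen scanning out
             * of a framebuffer allocated for a 1280-pixel-wide screen at 32 bpp. */
            int main(void)
            {
                const int bytes_per_pixel = 4;              /* 32 bpp                   */
                const int width = 1024;                     /* pixels visible per row   */
                const int pitch = 1280 * bytes_per_pixel;   /* bytes between row starts */
                const int x = 100, y = 50;

                long offset = (long)y * pitch + (long)x * bytes_per_pixel;

                printf("pixel (%d,%d) lives %ld bytes into the framebuffer\n", x, y, offset);
                printf("visible row: %d bytes, row-to-row stride: %d bytes\n",
                       width * bytes_per_pixel, pitch);
                return 0;
            }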

            There are a couple of different ways to handle multiple screens in the X world and I don't remember which is which right now -- you want both screens displaying out of a single framebuffer. Isn't there a "clone mode" which should do that, or does that use two separate servers?



            • #7
              For the Windows Catalyst kind of theatre mode I don't think cloning would work - the video could be minimized, run fullscreen, or scaled to any size on the primary screen, while at the same time running maximized at the secondary screen's native resolution. So there were two concurrent scalings of the video going on.

              As you say, there is a clone mode, where the lower resolution screen is windowed into the higher resolution screen (or both screens are windowed into a higher virtual resolution frame buffer).

