Multi GPUs and Multi Monitors - A Windows gamer wanting to use Linux


  • #1

    With Steam just going open beta recently, and it being a few years since I last tried a Linux distribution on my desktop PC, I decided it was time to install Ubuntu 12.04 and see how things had changed. Everything was going smoothly until it came time to work on my monitor setup.

    I have 2 different ATI cards: a 5400 for my two side monitors, and a 7850 for my middle primary monitor and for a TV connected via HDMI that is on occasionally for XBMC. Every time I've tried to move to Ubuntu, getting a multi-monitor setup running has brought me to a halt. The amount of misinformation and scattered, conflicting advice on topics like the xorg.conf config file, RandR, Xinerama, compositing, hardware acceleration, driver support, and Compiz and Unity capabilities is staggering.

    Once installed, the most recent proprietary drivers from ATI detected all of my monitors and let me arrange them side by side in the proper configuration with no issue. Looking good so far; however, enabling the multi-monitor setup without Xinerama just made the other monitors completely unusable, showing nothing but a blank desktop. The Xinerama option was available in the driver's menu, and I was able to get it working well across my 3 monitors in KDE and GNOME 2 (hotswapping the TV caused issues with the setup, and Unity and Compiz are completely broken with Xinerama enabled), but then I found out that Xinerama is an outdated solution that disables compositing and hardware acceleration. So using the primary monitor for gaming wasn't an option.

    Then by chance I stumbled across RandR. The proposed feature list in the Wikipedia article on RandR mentions support for multiple GPUs in version 1.5, but the only other information I gathered was that documentation on RandR is hard to find, and that Phoronix had posted some tech demonstration articles 6 months ago. So that's a no-go as well.

    So what options are there, and whose responsibility is it to support multiple monitors on multiple GPUs with hardware acceleration and compositing? ATI's and Nvidia's? Or X.Org's RandR? For a feature that is so easily enabled in Windows through the vendor drivers, it's sorely lacking in Linux.

  • #2
    Using the 5400 as an output slave (with all rendering done on the 7850) is impossible in your case, because:
    1. The proprietary drivers don't support DMA-BUF due to legal issues (long story); Catalyst doesn't support it either.
    2. Offloading/output slaves are possible with the FOSS drivers r600g (for the 5400) and radeonsi (for the 7850), but radeonsi is not quite ready.
    So maybe you could use the (mini)DisplayPort outputs that your 7850 already has? (Of course, in that case you will need DisplayPort to HDMI/DVI adapters.)
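    For illustration, on a setup where both cards run FOSS drivers, wiring one card up as an output slave is just a couple of xrandr calls (a sketch only; it needs X server 1.13+ with KMS drivers, and the provider numbers come from --listproviders on your machine):

        $ xrandr --listproviders                  # each GPU shows up as a "provider"
        $ xrandr --setprovideroutputsource 1 0    # provider 1 scans out images rendered by provider 0
        $ xrandr --auto                           # enable the newly available outputs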
    Last edited by RussianNeuroMancer; 24 December 2012, 03:19 AM.



    • #3
      Originally posted by RussianNeuroMancer View Post
      Using the 5400 as an output slave (with all rendering done on the 7850) is impossible in your case [...]
      Then why does it work perfectly on Windows?
      I don't want the 7850 to do all the rendering, only the rendering on the primary monitor and the TV. The 5400 would render everything on my other 2 monitors on either side of the primary.



      • #4
        Originally posted by rrohbeck
        but fglrx (any version from 9.something to the latest 12.11 beta) crashes reliably.
        Did you report this regression to AMD?
        Originally posted by Jekt View Post
        Then why does it work perfectly on Windows?
        You're asking the wrong person; I don't know anything about WDDM. Maybe someone else can explain.



        • #5
          Originally posted by RussianNeuroMancer View Post
          2. Offloading/output slaves are possible with the FOSS drivers r600g (for the 5400) and radeonsi (for the 7850), but radeonsi is not quite ready.
          Actually, it is not possible right now. For radeonsi you absolutely need glamor in order to get 3D acceleration, and glamor still does not work with X.Org 1.13, while the output slaves first appeared in 1.13. There isn't even a git commit of X where both glamor and output slaves work.

          There was another thread recently about multi-monitor setups with multiple cards, and ag5df did not recommend PRIME/dma-buf anyway: http://phoronix.com/forums/showthrea...792#post302792



          • #6
            Originally posted by Jekt View Post
            The amount of misinformation and scattered, conflicting advice on topics like the xorg.conf config file, RandR, Xinerama, compositing, hardware acceleration, driver support, and Compiz and Unity capabilities is staggering.
            Yes, that's quite the problem.

            Let me explain what all these things do, so that you can understand the problem.

            First, xorg.conf: as you probably already know, this file used to contain all the configuration information. The requirement for it originates in a time when there was no automatic hardware identification and no hotplugging, and everything had to be configured before even starting a program. On a modern X.Org it is (almost) no longer required: after many years of struggling with HAL and the weirdnesses of early udev versions (well, what's to come is IMHO just as broken), X.Org finally managed to pull in all the required configuration automatically at startup. That is, if everything works together.

            The proprietary drivers, unfortunately, don't. They need an extra invitation to nail down the configuration. But it's no longer as bad as it used to be.
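            In practice that extra invitation means letting the vendor tool write a skeleton xorg.conf for you, for example (commands as shipped by the respective driver packages at the time; run at your own risk):

                $ sudo aticonfig --initial -f    # fglrx: generate a fresh /etc/X11/xorg.conf
                $ sudo nvidia-xconfig            # NVidia blob: same idea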


            Xinerama: some time ago, X.Org supported something called Zaphod mode: a single X server could talk to several GPUs at once, spawning a separate X screen for each, but they'd share the input devices. Zaphod mode allowed for early kinds of multi-monitor setups, but using separate X screens meant that you couldn't drag a window from one screen to another. That's where Xinerama entered the picture. Instead of multiple X screens, Xinerama would route the drawing operations to the various GPUs where possible, and emulate a large framebuffer where needed. GPU routing did not work well though, so in the end Xinerama soon became pure large-framebuffer emulation, and hence slow.
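            Roughly, such a Zaphod-style layout glued together with Xinerama was configured like this in xorg.conf (a sketch; the identifiers and BusIDs are made up, check lspci for yours, and the Screen/Monitor sections tying each Device to a Monitor are omitted):

                Section "Device"
                    Identifier "Card0"
                    Driver     "radeon"
                    BusID      "PCI:1:0:0"
                EndSection

                Section "Device"
                    Identifier "Card1"
                    Driver     "radeon"
                    BusID      "PCI:2:0:0"
                EndSection

                Section "ServerLayout"
                    Identifier "Multihead"
                    Screen 0 "Screen0"
                    Screen 1 "Screen1" RightOf "Screen0"
                    Option "Xinerama" "on"    # stitch the separate X screens into one logical screen
                EndSection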

            At this point NVidia decided they wanted to give people proper multi-monitor support and developed TwinView. What it does is attach all the outputs of a graphics card to the same video framebuffer, just with different viewports, and use a Xinerama emulation to communicate this to the clients. If you think about it, using a single unified framebuffer is kind of obvious (although I personally like the original idea of Xinerama better, i.e. routing drawing commands).
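            In xorg.conf terms, TwinView was switched on per device with options along these lines (a sketch from memory of the legacy NVidia driver; the mode names and offsets are examples):

                Section "Device"
                    Identifier "nvidia0"
                    Driver     "nvidia"
                    Option "TwinView"  "True"
                    # one framebuffer, two viewports placed side by side:
                    Option "MetaModes" "1920x1080 +0+0, 1920x1080 +1920+0"
                EndSection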

            At some point, all the drivers for hardware supporting multiple outputs learned the unified framebuffer trick, at which point Xinerama became obsolete. But now something else had to step in to allow clients to configure and query the screen.

            So XRandR was born. XRandR is a protocol extension that allows clients to query and change screen and monitor setups. And while people were at it, they concluded that features like rotating or reflecting a screen could indeed be useful, and made those a feature for future X.Org drivers. So XRandR is both a protocol and (as RandR) a capability of hardware/drivers to manage multiple-outputs-on-unified-framebuffer configurations.
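            The query/set side of the protocol is what the xrandr command-line tool exposes. For instance (output names like DVI-0 and HDMI-0 are whatever your driver reports):

                $ xrandr                                            # query outputs, modes and the current layout
                $ xrandr --output DVI-0 --mode 1920x1080 --left-of HDMI-0
                $ xrandr --output HDMI-0 --rotate left              # the rotate capability mentioned above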

            However, it came at a price: X.Org no longer supports Zaphod mode, which is a problem because it can no longer talk to independent GPUs at the same time. We're getting near your problem now. If you want to span an X display across multiple GPUs, the burden is on the driver to properly distribute the drawing commands to the right hardware. And due to the unified framebuffer, those GPUs must operate on a single memory address space. NVidia and ATI did solve this problem, with SLI and CrossFire, but as you probably know, for those to work all the GPUs must be of the same family.

            In your case, however, you're using GPUs of different families, and there the whole unified framebuffer model breaks down. A proper Xinerama router would be the solution to your problem, but unfortunately no such thing exists. And because Zaphod mode is no longer supported, you can't even use the same X.Org instance for both GPUs.

            So what can you do? Well, you can start multiple instances of X.Org. And then what? There are different options. One is to run DMX (Distributed Multihead X) on top of them, but unfortunately things like OpenGL don't distribute well over DMX. The reason is how the OpenGL ABI for GLX has been defined, and the huge amount of headaches that produces. It's ironic: OpenGL was first developed on top of X11 by SGI, then ported over to Windows NT, and the Windows way of dealing with OpenGL, i.e. having an intermediate layer (the common opengl32.dll) that forwards the calls to the vendor's ICD, is the better solution. GLX/Linux doesn't have something like this yet, though this year a number of interesting things were proposed. You can also get some limited HW-accelerated OpenGL support using VirtualGL.
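            A sketch of what that looks like (the display numbers and config file names are hypothetical; Xdmx and vglrun are real tools, how well they behave is another matter):

                $ X :1 -config /etc/X11/xorg.gpu0.conf &      # one X server per GPU
                $ X :2 -config /etc/X11/xorg.gpu1.conf &
                $ Xdmx :3 -display :1 -display :2 +xinerama   # glue them into one logical display
                $ DISPLAY=:3 vglrun -d :1 glxgears            # VirtualGL: render on the GPU behind :1, ship pixels to the DMX display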

            But unless something in the Linux graphics driver model changes fundamentally, we'll be stuck with sub-par solutions. And Wayland won't help here either, because it doesn't solve the underlying problems. Yes, a Wayland compositor can actually implement some kind of multi-GPU single-desktop solution, but it's going to be messy.

            One of the most important changes, IMHO, would be to decouple GPU operations from output framebuffers. Right now, when you open a graphics device you get exclusive access to it, which means that if there are multiple X sessions running (or Wayland sessions later), only the framebuffers associated with the currently active session can be accessed by the GPU (depending on the driver or GPU, this may even be done by completely swapping out the contents of the video RAM). Given the way we use GPUs these days, this is crazy.

            IMHO, GPUs should be treated as co-processors that can be used by any program (given the right permissions) without requiring an on-screen framebuffer to be available. What a GPU renders should not go out to a display device directly, but to some portion of memory (the bandwidth of PCI Express suffices for this). The output connectors (by which I mean the image transmitters) should not depend on the GPU's RAM, but on a separate portion of memory, and should work independently of the GPU's drawing operations. Programs like X.Org would only connect to the display transmitters, which would act like a 1990s-style VGA framebuffer-to-display adapter with no HW drawing acceleration at all. And it should be possible to map the render output of the GPU on card A to the display transmitter memory on card B.

            As it turns out, today's hardware is perfectly capable of doing this. It's just that the current Linux graphics driver model doesn't support it. And unfortunately, the functions required to support it (DMA-BUF) have been locked down to GPL-only code, which means NVidia will probably never support it.

            The whole thing is a mess, and I'm disgusted by it. (Regarding my competence on the topic, I suggest you check my StackOverflow profile.)



            • #7
              Originally posted by rrohbeck
              How do I do that? Is there a bugzilla or similar for fglrx?
              There used to be. But don't expect anything from it. About a year ago I thoroughly tested fglrx for OpenGL implementation bugs and found quite a number of them. I wrote test cases and demonstration programs that reliably trigger the bugs (including X.Org crashes and HW DoS, i.e. you'd have to reboot the machine) and submitted it all. I never even got a status update.

              Regarding bugs in NVidia drivers: I usually report them directly to my contacts at NVidia, but the last time I found a new bug in the NVidia drivers was 2006.



              • #8
                Originally posted by Jekt View Post
                [...] So what options are there, and whose responsibility is it to support multiple monitors on multiple GPUs with hardware acceleration and compositing? ATI's and Nvidia's? Or X.Org's RandR? For a feature that is so easily enabled in Windows through the vendor drivers, it's sorely lacking in Linux.

                I have a GTX 295, and although it's one physical card, it actually has two GPUs inside. It has 2 DVI ports on one and an HDMI port on the other. Last I tried, you could only get two monitors on one card to work nicely with NVidia's "TwinView" option; otherwise you had to use Xinerama or separate X servers. I don't know enough to know who to blame for this.



                • #9
                  Originally posted by rrohbeck
                  How do I do that? Is there a bugzilla or similar for fglrx?
                  There are many ways to do that:
                  Tech. support: http://emailcustomercare.amd.com
                  Bugtracker: http://ati.cchtml.com
                  New forum for issues related to Steam: http://devgurus.amd.com/community/steam-linux
                  Feedback form: http://www.amd.com/us/LinuxCrewSurvey

                  Don't forget to attach the report from /usr/share/fglrx/atigetsysteminfo.sh.

                  Originally posted by datenwolf View Post
                  There used to be. But don't expect anything from it. About a year ago I thoroughly tested fglrx for OpenGL implementation bugs and found quite a number of them. I wrote test cases and demonstration programs that reliably trigger the bugs (including X.Org crashes and HW DoS, i.e. you'd have to reboot the machine) and submitted it all. I never even got a status update.
                  They don't update statuses on that bugtracker; it is used only for submitting issues to the Catalyst Linux team.
                  So what is the status of the issues you submitted in the current driver? I mean, are they still reproducible in Catalyst 12.10-12.11?

                  Originally posted by datenwolf View Post
                  Regarding bugs in NVidia drivers: I usually report them directly to my contacts at NVidia, but the last time I found a new bug in the NVidia drivers was 2006.
                  Could you please pass this link on to your contacts at nVidia?
                  Morning, I get frequent freezes/crashes of Xorg with driver 310.19 on a GTS 250 on 3.2.0-4-amd64 (the current Debian testing kernel). It happens quickly when I use any 3D function (compositing of the GNOME Shell desktop, launching a game via Wine), but if I just let the PC run after a reboot, it can run all night without crashing. In Xorg.0.log I can see an error and backtrace:
                  [ 19206.280] (WW) NVIDIA(0): WAIT (2, 6, 0x8000, 0x0000ac2c, 0x0000b2b0)
                  [ 19212.525] [mi] EQ overflowing...

                  This bug was submitted to nVidia tech support in August, but in the 313 driver release it is still not fixed.

                  Originally posted by cybjanek View Post
                  I have a GTX 295, and although it's one physical card, it actually has two GPUs inside.
                  You probably mean something like a RAMDAC (I'm not sure what that part is called for digital outputs these days).



                  • #10
                    Originally posted by RussianNeuroMancer View Post
                    May you please give this link to your contacts in nVidia?
                    Have you also filed the bug with [email protected]? Developers will usually give you their contact details personally if they wish to be contacted directly.
