Thread: Multi GPUs and Multi Monitors - A Windows gamer wanting to use Linux

  1. #11

    Quote Originally Posted by rrohbeck View Post
    How do I do that? Is there a bugzilla or similar for fglrx?
    There are several ways to do that:
    Tech. support: http://emailcustomercare.amd.com
    Bugtracker: http://ati.cchtml.com
    New forum for issues related to Steam: http://devgurus.amd.com/community/steam-linux
    Feedback form: http://www.amd.com/us/LinuxCrewSurvey

    Don't forget to attach the report generated by /usr/share/fglrx/atigetsysteminfo.sh.

    Quote Originally Posted by datenwolf View Post
    There used to be. But don't expect anything from it. About a year ago I thoroughly tested fglrx for OpenGL implementation bugs, found quite a number of them. Wrote testcases and demonstration programs reliably triggering the bugs (including X.Org crashes and HW DoS, i.e. you'd have to reboot the machine), submitted it all. Never got even a status update.
    They don't update the status on this bugtracker; it is only used for submitting issues to the Catalyst Linux team.
    So what is the status of the issues you submitted with the current driver? I mean, are they still reproducible in Catalyst 12.10-12.11?

    Quote Originally Posted by datenwolf View Post
    Regarding bugs in NVidia drivers: I usually report them directly to my contacts at NVidia, but the last time I found a bug in NVidia's drivers was 2006.
    Could you please pass this link along to your contacts at NVidia?
    https://devtalk.nvidia.com/default/t...3-2-0-4-amd64/
    This bug was submitted to NVidia tech support in August, but it is still not fixed in the 313 driver release.

    Quote Originally Posted by cybjanek View Post
    I have a GTX 295, and although it's one physical card, it actually has two GPUs inside.
    You probably mean something like a RAMDAC (not sure what that part is called for digital outputs these days).

  2. #12
    Join Date: May 2007
    Location: Third Rock from the Sun
    Posts: 6,583

    Quote Originally Posted by RussianNeuroMancer View Post
    Could you please pass this link along to your contacts at NVidia?
    Have you also filed the bug with linux-bugs@nvidia.com? Developers will usually give you their contact details personally if they wish to be contacted directly.

  3. #13

    Quote Originally Posted by deanjo View Post
    Have you also filed the bug with linux-bugs@nvidia.com?
    No. After submitting the issue to tech support and to the official forum, is it really necessary to also send it to linux-bugs@nvidia.com?

  4. #14
    Join Date: Dec 2012
    Posts: 3

    Thanks for the write-up, Datenwolf, appreciate it a lot.
    So do you think I would have more success with a multi-GPU setup from the same family? I've also been reading up on using the Eyefinity DisplayPort outputs on my 7850. I was looking at trying to use the 7850 for everything: HDMI for the TV, two Mini DisplayPort to DVI adapters, and the last DVI port.

  5. #15
    Join Date: Dec 2007
    Posts: 2,337

    The only way to use multi-GPU multi-display is with zaphod mode, optionally combined with Xinerama if you want all the displays to act as one big desktop. The X server still supports zaphod mode and Xinerama. Basically, no one has written the code yet to support this in X the way it works in Windows. The PRIME/hotplug work Dave recently landed in the latest X server lays the groundwork, but it has not yet been extended to support multi-GPU multi-display. Finishing that would be a good project for someone looking to get a deep understanding of X server internals.
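
    For reference, here is a rough sketch of what a two-card zaphod-style xorg.conf with Xinerama can look like. This is not from the post above: the driver name and PCI bus IDs are placeholders (check yours with lspci), and the exact options vary by driver and hardware generation:

    Section "Device"
        Identifier "Card0"
        Driver     "radeon"       # placeholder; use whichever driver fits your card
        BusID      "PCI:1:0:0"    # hypothetical bus ID; check with lspci
    EndSection

    Section "Device"
        Identifier "Card1"
        Driver     "radeon"
        BusID      "PCI:2:0:0"    # hypothetical bus ID; check with lspci
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device     "Card0"
    EndSection

    Section "Screen"
        Identifier "Screen1"
        Device     "Card1"
    EndSection

    Section "ServerLayout"
        Identifier "Zaphod"
        Screen  0 "Screen0" 0 0
        Screen  1 "Screen1" RightOf "Screen0"
        Option  "Xinerama" "on"   # merge the per-card screens into one desktop
    EndSection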

  6. #16
    Join Date: May 2007
    Location: Third Rock from the Sun
    Posts: 6,583

    Quote Originally Posted by RussianNeuroMancer View Post
    No. After submitting the issue to tech support and to the official forum, is it really necessary to also send it to linux-bugs@nvidia.com?
    Yes. Developers do visit the forums, but the only way to get 1:1 contact on issues is to use that email address.

  7. #17

    Quote Originally Posted by deanjo View Post
    Quote Originally Posted by RussianNeuroMancer View Post
    Quote Originally Posted by deanjo View Post
    Have you also filed the bug with linux-bugs@nvidia.com? Developers will usually give you their contact details personally if they wish to be contacted directly.
    No. After submitting the issue to tech support and to the official forum, is it really necessary to also send it to linux-bugs@nvidia.com?
    Yes. Developers do visit the forums, but the only way to get 1:1 contact on issues is to use that email address.
    Okay, let me rephrase my question: after submitting an issue to tech support, is it really necessary to also send it to linux-bugs@nvidia.com?

  8. #18
    Join Date: Nov 2011
    Location: Orange County, CA
    Posts: 76

    I finally documented and submitted my crash to AMD. See here:
    http://ati.cchtml.com/show_bug.cgi?id=687

    I got the open source driver to work somewhat with radeonsi and glamor, but it still crashes when I try to use xrandr to rotate a monitor into portrait orientation.

  9. #19
    Join Date: Jun 2012
    Posts: 340

    Quote Originally Posted by datenwolf View Post
    IMHO GPUs should be treated as co-processors that can be used by any program (given the right permissions) without requiring an on-screen framebuffer to be available. What a GPU renders should not go out to a display device directly, but to some portion of memory (the bandwidth of PCI-E suffices for this). The output connectors (by which I mean the image transmitters) should not depend on the GPU's RAM, but on a separate portion of memory, and should work independently of the GPU's drawing operations. Programs like X.Org would only connect to the display transmitters, which would act like a 1990s-style VGA framebuffer-to-display adaptor with no HW drawing acceleration at all. And it should be possible to map the render output of the GPU on card A to the display transmitter memory on card B.
    I agree 100%. GPUs should indeed be treated as co-processors, and it should be possible to pass information between them if so desired.

    As it turns out, today's hardware is perfectly capable of doing this. It's just that the current Linux graphics driver model doesn't support it. And unfortunately those functions required to support it (DMA-BUF) have been locked down to GPL only, which means NVidia will probably never support it.
    Ah, the downsides of the GPL license...
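
    To make the DMA-BUF/PRIME mechanism mentioned above a bit more concrete, here is a minimal userspace sketch of sharing a buffer between two DRM devices using libdrm's PRIME helpers. It assumes both drivers expose PRIME support and that you run with sufficient privileges; the device paths, buffer size, and the dumb-buffer shortcut are purely illustrative:

    /* Sketch: share a buffer between two DRM devices via DMA-BUF/PRIME.
     * Build (assuming libdrm is installed):
     *   gcc prime_share.c -o prime_share $(pkg-config --cflags --libs libdrm) */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>

    int main(void)
    {
        int render_fd  = open("/dev/dri/card0", O_RDWR); /* card that renders       */
        int display_fd = open("/dev/dri/card1", O_RDWR); /* card driving the output */
        if (render_fd < 0 || display_fd < 0) {
            perror("open /dev/dri/card*");
            return 1;
        }

        /* Create a dumb (unaccelerated) buffer on the rendering card just to
         * obtain a GEM handle; a real application would render into it. */
        struct drm_mode_create_dumb creq = { .width = 1920, .height = 1080, .bpp = 32 };
        if (drmIoctl(render_fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq) < 0) {
            perror("DRM_IOCTL_MODE_CREATE_DUMB");
            return 1;
        }

        /* Export the GEM handle as a DMA-BUF file descriptor... */
        int prime_fd;
        if (drmPrimeHandleToFD(render_fd, creq.handle, DRM_CLOEXEC, &prime_fd) < 0) {
            perror("drmPrimeHandleToFD");
            return 1;
        }

        /* ...and import it on the second card, getting a GEM handle there. */
        uint32_t imported;
        if (drmPrimeFDToHandle(display_fd, prime_fd, &imported) < 0) {
            perror("drmPrimeFDToHandle");
            return 1;
        }

        printf("buffer shared: handle %u on card0 -> handle %u on card1\n",
               creq.handle, imported);

        close(prime_fd);
        close(display_fd);
        close(render_fd);
        return 0;
    }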
