AMD RadeonSI Graphics Driver Still Troublesome On Linux

  • Rakot
    replied
    Originally posted by darkbasic View Post
    Hi,
    The patch did work for me, you should try it.

    Also, if someone is interested, here is my backported graphics stack for kernel 3.11.1:
    http://www.linuxsystems.it/linux-drm...ack-backports/
    I saw your bug report. I'll give your patches a try. Thanks.

  • darkbasic
    replied
    Originally posted by Rakot View Post
    Hello, Alex,

    I have a muxless laptop with an Intel HD 4000 + Radeon 7750M, so physically no display is connected to the discrete graphics card. I have the following results from the Unigine Tropics benchmark (1024x768, all settings on high):
    Code:
    intel hd 4000                   22.7 fps
    radeon 7750M (fglrx)            44.9 fps
    radeon 7750M (radeonsi auto)     9.6 fps
    radeon 7750M (radeonsi high)    24.1 fps
    Software:
    kernel 3.11.1 with DPM enabled
    The rest is from today's git.

    With auto, my Radeon GPU is always stuck in the low power state:
    Code:
    uvd    vclk: 0 dclk: 0
    power level 0    sclk: 30000 mclk: 15000 vddc: 850 vddci: 900 pcie gen: 2
    Only forcing the high performance level helps me. In bug report 69395 you attached a patch. Will it help in my laptop's case?

    Also, do you know the state of dynamic switching of the discrete GPU? Dave merged the nouveau patches for NVIDIA, but his branch for Radeons hasn't been touched in three weeks. Is there a chance this support will land in the 3.12 kernel?
    Hi,
    The patch did work for me, you should try it.

    Also, if someone is interested, here is my backported graphics stack for kernel 3.11.1:
    http://www.linuxsystems.it/linux-drm...ack-backports/

  • MrCooper
    replied
    Originally posted by madbiologist View Post
    Also, this commit, made on 10 September 2013, fixes hangs in Unigine Tropics and various games - see FDO bug 69340.
    The bug fixed by that commit also causes most textures to be untiled, which could have a negative impact on performance.

  • Rakot
    replied
    Alex, thank you for your responses and awesome work!

  • madbiologist
    replied
    Originally posted by madbiologist View Post
    Nice article. Although the article was published on 17 September 2013, the mesa-devel commit mentioned in its introduction was made on 9 September 2013. The Mesa RadeonSI stream-out/transform feedback commit made on 12 September (covered elsewhere on Phoronix) might get a few more of the Unigine benchmarks working.
    Also, this commit, made on 10 September 2013, fixes hangs in Unigine Tropics and various games - see FDO bug 69340.

  • mannerov
    replied
    Originally posted by Rakot View Post
    Hello, Alex,

    I have a muxless laptop with an Intel HD 4000 + Radeon 7750M, so physically no display is connected to the discrete graphics card. I have the following results from the Unigine Tropics benchmark (1024x768, all settings on high):
    Code:
    intel hd 4000                   22.7 fps
    radeon 7750M (fglrx)            44.9 fps
    radeon 7750M (radeonsi auto)     9.6 fps
    radeon 7750M (radeonsi high)    24.1 fps
    Thank you for sharing your benchmarks.

    I have a similar configuration (Intel HD 4000 + Radeon HD 7730M).

    I've worked on PRIME on Wayland, and I can tell you that when it is ready you'll get more flexibility, and probably better performance from your dedicated card (either on XWayland or for Wayland games). I'm not entirely sure how X handles the dedicated card, but depending on how it does so, you should see better performance.

    On Wayland, you can choose between:
    - running an application on the dedicated card while everything else stays on the integrated card. The buffers the application renders into are shared between the two cards, and so that both cards can understand the buffer contents, tiling is disabled (another way to handle this would be an intermediate buffer to copy into, which would have some benefits and some drawbacks; I'm not sure which solution X uses). This is the approach you currently use, via a trick.
    - launching a compositor on the dedicated card (XWayland runs on the dedicated card, etc.), which allows that card to use tiling and render applications in VRAM. The final buffer is imported as a framebuffer for the integrated card to display (no copy). It is possible to bypass rendering anything on the integrated card entirely if we want.

    You can run a nested compositor on Wayland, so if you are on the integrated card, just launch a fullscreen compositor on the dedicated card and everything will behave as if you had launched the main compositor on the dedicated card (clients would use tiling and VRAM; the final fullscreen buffer would be imported as a framebuffer).

    Synchronisation with the screen's refresh rate when rendering on the dedicated card works on Wayland (not on X, if I have understood correctly). Don't confuse this with DMA-buf synchronisation, which is not ready; for the moment we get glitches with the first solution I described.

    I'll post some glmark2 benchmarks of both solutions in the Wayland thread.
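    In the meantime, a rough way to make the same comparison on X today is Mesa's DRI_PRIME render-offload variable; a minimal sketch, assuming glmark2 and glxinfo are installed and the discrete GPU is already configured as an offload provider:
    Code:
    # default: render on the integrated GPU
    glmark2

    # offload rendering to the discrete GPU via PRIME
    DRI_PRIME=1 glmark2

    # confirm which GPU actually rendered each run
    glxinfo | grep "OpenGL renderer"
    DRI_PRIME=1 glxinfo | grep "OpenGL renderer"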

  • madbiologist
    replied
    Nice article. Although the article was published on 17 September 2013, the mesa-devel commit mentioned in its introduction was made on 9 September 2013. The Mesa RadeonSI stream-out/transform feedback commit made on 12 September (covered elsewhere on Phoronix) might get a few more of the Unigine benchmarks working.

  • gsedej
    replied
    Originally posted by Ericg View Post
    Alex, just for clarification for those reading this forum thread...

    What's the status (what's possible, what can be handled?) of multi-GPU setups under Linux?

    DMA-buf was merged months ago (if not over a year ago). Does radeon have support for DMA-buf buffers?

    DMA-sync was a work in progress last I heard. Is it still a work in progress, or was it merged? If it was merged, what's radeon's support like?
    It's not bad. I have access to 2 machines at work.

    1. Desktop with an IGD Radeon HD 4200 + PCIe GeForce 8600GT (and a GTS 250).
    DMA-BUF (DRI_PRIME=1) works (see the command sketch after this post), but unfortunately the nouveau driver is not very fast.
    Monitor sharing also works: I was able to connect two monitors to the motherboard (radeon) and one to the GeForce, and got a single extended desktop across three monitors.
    I tried it a few months ago. Here are some images (you need to "view full resolution").

    2. Laptop with an IGD AMD + a discrete AMD card. I forget which cards exactly, but DMA-BUF works (I haven't tried it with DPM yet). Powering off the discrete card for better battery life also works. There is a bug when closing the lid while the discrete card is powered off: the machine needs to be restarted. I haven't had time to investigate yet.
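    For reference, a minimal sketch of the usual X/PRIME commands behind the results above; the provider names "radeon" and "nouveau" are examples, check xrandr --listproviders for the actual names on your system:
    Code:
    # list the GPUs X knows about and their provider capabilities
    xrandr --listproviders

    # render offload: let the discrete GPU (nouveau here) render for the
    # radeon-driven desktop, then run an application on it
    xrandr --setprovideroffloadsink nouveau radeon
    DRI_PRIME=1 glxgears

    # monitor sharing: use the outputs on the discrete card from the
    # integrated desktop (the three-monitor setup described above)
    xrandr --setprovideroutputsource nouveau radeon
    xrandr --auto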

  • Ericg
    replied
    Originally posted by agd5f View Post
    Yes, that patch should help in your case. I just need to sort out how to properly handle that dynamically.
    The merge window for 3.12 has closed. Hopefully we'll have something for 3.13.
    Alex, just for clarification for those reading this forum thread...

    What's the status (what's possible, what can be handled?) of multi-GPU setups under Linux?

    DMA-buf was merged months ago (if not over a year ago). Does radeon have support for DMA-buf buffers?

    DMA-sync was a work in progress last I heard. Is it still a work in progress, or was it merged? If it was merged, what's radeon's support like?

    What's the default behavior for radeon currently when there are two graphics cards in one system, one integrated and one dedicated? Does radeon know to automatically keep the dedicated card off (unless there's a cable hooked into it) when no load is running on it, or does it keep it powered on, burning power?


    I'm asking because I'm building an AMD (+AMD in the future) system and I want to know what to expect from it. I won't need the dedicated card all the time, so plugging the HDMI cable into its port would be a massive waste of energy and heat. So I'm curious what the state of support for AMD+AMD hybrid graphics is in the open-source driver, and whether it's good enough that I can keep the cable plugged into the motherboard (not the dedicated card) and just use the dedicated card to render games when needed.
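    For what it's worth, the manual control available today is the vgaswitcheroo debugfs interface; a minimal sketch, assuming debugfs is mounted and the commands are run as root (exact behaviour varies by kernel version):
    Code:
    # show both GPUs and whether each is powered (Pwr) or off (Off)
    cat /sys/kernel/debug/vgaswitcheroo/switch

    # power off whichever GPU is not currently driving the display
    echo OFF > /sys/kernel/debug/vgaswitcheroo/switch

    # power it back on, e.g. before starting a DRI_PRIME workload
    echo ON > /sys/kernel/debug/vgaswitcheroo/switch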

  • agd5f
    replied
    Originally posted by Rakot View Post
    Only forcing the high performance level helps me. In bug report 69395 you attached a patch. Will it help in my laptop's case?
    Yes, that patch should help in your case. I just need to sort out how to properly handle that dynamically.

    Originally posted by Rakot View Post
    Also, do you know the state of dynamic switching of the discrete GPU? Dave merged the nouveau patches for NVIDIA, but his branch for Radeons hasn't been touched in three weeks. Is there a chance this support will land in the 3.12 kernel?
    The merge window for 3.12 has closed. Hopefully we'll have something for 3.13.
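    Until that is handled automatically, the manual workaround discussed in this thread is to force the DPM performance level through sysfs; a minimal sketch, assuming a 3.11+ kernel booted with radeon.dpm=1 and that the Radeon is card0 (on hybrid laptops it is often card1):
    Code:
    # inspect the current DPM state and clock levels (debugfs, as root)
    cat /sys/kernel/debug/dri/0/radeon_pm_info

    # force the high performance level instead of "auto"
    echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level

    # return to automatic level selection
    echo auto > /sys/class/drm/card0/device/power_dpm_force_performance_level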
