They've got Marek.
Kudos Michael, that was a good set of benchmarks. Relevant variables clearly separated out and compared, with some demanding tests included (even if most of them failed!) rather than just open-source games with ridiculous frame rates.
I have a muxless laptop with an Intel HD 4000 + Radeon 7750M, so physically no display is connected to the discrete graphics card. Here are my results from the Unigine Tropics benchmark (1024x768, all settings high):
Code:
intel hd 4000                  22.7 fps
radeon 7750M (fglrx)           44.9 fps
radeon 7750M (radeonsi auto)    9.6 fps
radeon 7750M (radeonsi high)   24.1 fps
kernel 3.11.1 with dpm enabled
The rest is from today's git
With auto, my Radeon GPU is always stuck in the low power state:
Code:
uvd    vclk: 0 dclk: 0
power level 0    sclk: 30000 mclk: 15000 vddc: 850 vddci: 900 pcie gen: 2
Only forcing the high performance level helps me. In bug report 69395 you attached a patch. Will it help in my laptop's case?
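For anyone wanting to try the same workaround, here is a minimal sketch of forcing the DPM performance level through sysfs. The `card0` index is an assumption for this machine, and the file only exists on kernels with radeon DPM enabled:

```shell
#!/bin/sh
# Force the radeon DPM performance level via sysfs (kernel 3.11+ booted
# with radeon.dpm=1). Valid values are "auto", "low" and "high".
force_dpm_level() {
    level=$1
    file=${2:-/sys/class/drm/card0/device/power_dpm_force_performance_level}
    if [ -w "$file" ]; then
        # Write the requested level, then read back what the driver accepted.
        echo "$level" > "$file" && cat "$file"
    else
        echo "cannot write $file (need root, or no radeon DPM here)"
    fi
}
force_dpm_level high
```

Echoing `auto` back into the same file returns the card to dynamic management.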
Also, do you know the state of dynamic switching for the discrete GPU? Dave merged the nouveau patches for NVIDIA, but his branch for Radeons hasn't been touched for three weeks. Is there a chance this support lands in the 3.12 kernel?
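Until dynamic switching lands, the current power state of each GPU can at least be inspected through vga_switcheroo. A sketch, assuming `CONFIG_VGA_SWITCHEROO` is enabled and debugfs is mounted at the usual path:

```shell
#!/bin/sh
# Inspect the vga_switcheroo state: each line shows a GPU as IGD
# (integrated) or DIS (discrete) plus its power state, e.g. Pwr, Off,
# DynPwr or DynOff. Requires root and a mounted debugfs.
show_switcheroo() {
    file=${1:-/sys/kernel/debug/vgaswitcheroo/switch}
    if [ -r "$file" ]; then
        cat "$file"
    else
        echo "vga_switcheroo not readable at $file"
    fi
}
show_switcheroo
```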
What's the status of multi-GPU setups under Linux (what's possible, what can be handled)?
DMA-buf was merged months ago, if not over a year ago. Does radeon support DMA-buf buffers?
DMA-buf sync was a work in progress last I heard. Is it still in progress, or was it merged? If it was merged, what's radeon's support like?
What's radeon's default behavior currently when there are two graphics cards in one system, one integrated and one dedicated? Does radeon know to automatically keep the dedicated card powered off (unless a cable is hooked into it or a load is being run on it), or does it keep it powered on, burning power?
I'm asking because I'm building an AMD (+AMD in the future) system and I want to know what to expect from it. I won't need the dedicated card all the time, so driving the display from its HDMI port would be a massive waste of energy and heat. So I'm curious what the state of AMD+AMD hybrid graphics support is in the open-source driver, and whether it's good enough that I can keep the cable plugged into the motherboard output and just use the dedicated card for game rendering when needed.
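For the cable-in-the-motherboard setup described above, the inactive card can already be powered off manually through vga_switcheroo. A hedged sketch of the manual interface (this is not the automatic runtime PM being discussed; the debugfs path is the standard one but needs root):

```shell
#!/bin/sh
# Power off whichever GPU is currently inactive ("OFF"); "ON" restores
# power to it. This is the manual vga_switcheroo interface, so the
# savings last only until something turns the card back on.
set_switcheroo() {
    cmd=$1
    file=${2:-/sys/kernel/debug/vgaswitcheroo/switch}
    if [ -w "$file" ]; then
        # Send the command, then show the resulting per-GPU power states.
        echo "$cmd" > "$file" && cat "$file"
    else
        echo "cannot write $file (need root and debugfs)"
    fi
}
set_switcheroo OFF
```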
1. Desktop with IGD Radeon HD 4200 + PCIe GeForce 8600GT (and GTS250)
DMA-BUF offloading (DRI_PRIME=1) works, but unfortunately the nouveau driver is not very fast.
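For anyone who wants to reproduce this, a minimal sketch of checking PRIME offload from a running X session (assumes an X server with RandR 1.4 and `glxinfo` from mesa-utils/mesa-demos):

```shell
#!/bin/sh
# Check which GPU a PRIME-offloaded client actually renders on.
prime_check() {
    if [ -n "$DISPLAY" ] && command -v glxinfo >/dev/null 2>&1; then
        xrandr --listproviders                        # GPUs the X server knows about
        DRI_PRIME=1 glxinfo | grep "OpenGL renderer"  # should name the offload GPU
    else
        echo "no X display or glxinfo available"
    fi
}
prime_check
```

Any GL application started with `DRI_PRIME=1` in its environment is rendered on the second provider the same way.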
Monitor sharing also works: I was able to connect two monitors to the motherboard (radeon) outputs and one to the GeForce, and got one extended desktop across all three monitors. I tried it a few months ago and posted images of the setup.
2. Laptop with an integrated AMD + discrete AMD GPU. I forget which actual cards, but DMA-BUF works (I haven't tried it with DPM yet). Powering off the discrete card for better battery life also works. There is a bug when closing the lid while the discrete card is powered off: the machine needs a restart. I haven't had time to investigate yet.
Nice article. Although it was published on 17th September 2013, the mesa-devel commit mentioned in the article's introduction was made on 9th September 2013. The Mesa radeonSI stream-out/transform-feedback commit made on 12th September (covered elsewhere on Phoronix) might get a few more of the Unigine benchmarks working.
I have a similar configuration (Intel HD 4000 + Radeon HD 7730M).
I've worked on Prime on Wayland, and I can tell you that when it is ready you'll get better flexibility, and probably better performance from your dedicated card (either through XWayland or with native Wayland games). I'm not fully sure how X handles the dedicated card, but depending on how it does, you should see better performance.
On Wayland, you can choose between:
- Run an application on the dedicated card while everything else stays on the integrated card. The buffers the application renders into are shared between the two cards, and so that both cards can understand the buffer contents, tiling is disabled. (Another way to handle this would be an intermediate buffer to copy into, which has its own benefits and drawbacks; I'm not sure which solution X uses. Currently you enable this mode with a trick.)
- Launch a compositor on the dedicated card (XWayland runs on the dedicated card, etc.), which allows the card to use tiling and render applications in VRAM. The final buffer is imported as a framebuffer for the integrated card to display (no copy). It is even possible to bypass rendering anything on the integrated card entirely.
Wayland supports embedded (nested) compositors, so if you are on the integrated card, you can just launch a fullscreen compositor on the dedicated card and everything behaves as if the main compositor were launched on the dedicated card (clients would use tiling and VRAM; the final fullscreen buffer would be imported as a framebuffer).
Synchronisation with the screen's refresh rate when rendering on the dedicated card works on Wayland (not on X, if I've understood correctly). Don't confuse this with DMA-buf synchronization, which is not ready; for the moment we get glitches with the first solution I described.
I'll post some glmark2 benchmarks of both solutions in the Wayland thread.
Alex, thank you for your responses and awesome work!