2D Performance Is Improving For Ubuntu 13.10 XMir
-
Originally posted by mrugiero View Post
Two things. First, no, you can't make it talk directly to Mir if using a fullscreen X server. Second, neither Wayland nor Mir is supposed to bring noticeably better performance than X for fullscreen games, for the simple reason that such games hardly talk to the X server at all. At most they call glXSwapBuffers, or an equivalent, which they would need to call anyway. Most of the work is done via direct rendering, which should not be affected by any of Mir, X, or Wayland.
BTW, this discussion of Mir vs. Wayland is really fun right now. Both display servers are really similar from a game developer's point of view, so writing two different backends isn't a problem. If Canonical wants its own DS, who cares? Welcome to the open source world, where everyone can do what they want. I'm happy that X will soon be replaced by a modern DS. Both the Mir and Wayland teams did a great job for the Linux world. Competition always improves the level of software. If someone has that many problems with Mir, look at what Intel did with OpenCL; it's the same situation.
Last edited by nadro; 11 September 2013, 04:55 AM.
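The "two different backends" idea above can be sketched as a small abstraction layer that a game might keep between its renderer and the display server. All class and method names below are hypothetical illustrations, not real Mir or Wayland API:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: the game talks to one tiny interface, so supporting
# both Mir and Wayland means writing two thin backends behind it.
# None of these names correspond to real Mir or Wayland API calls.
class DisplayBackend(ABC):
    @abstractmethod
    def create_fullscreen_surface(self) -> str: ...

    @abstractmethod
    def swap_buffers(self) -> str: ...

class WaylandBackend(DisplayBackend):
    def create_fullscreen_surface(self) -> str:
        return "wayland: surface created"

    def swap_buffers(self) -> str:
        return "wayland: buffers swapped"

class MirBackend(DisplayBackend):
    def create_fullscreen_surface(self) -> str:
        return "mir: surface created"

    def swap_buffers(self) -> str:
        return "mir: buffers swapped"

def run_frame(backend: DisplayBackend) -> str:
    # All the heavy lifting happens via direct rendering outside the backend;
    # the display server only sees the final buffer swap.
    return backend.swap_buffers()
```

This mirrors the point made earlier in the thread: since fullscreen games mostly use direct rendering, the per-server code is limited to surface creation and the swap call, which is why the two backends stay thin.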
-
Originally posted by TheBlackCat View Post
How is it "superior" to be running a rooted X inside Mir/Wayland? It is not that this sort of thing would be fundamentally impossible in XWayland, it is just that nobody implemented it because they didn't see any reason to. So what is the advantage of running rooted X inside XMir/XWayland?
Some DEs won't get ported to Wayland or Mir (XFCE, for example, has said it will stick to X).
So if you want to use XFCE with a flicker-free boot experience, you will have to use Mir as a system compositor, compositing the boot and the login manager, and then run XFCE in XMir.
Also, I like the idea of full compatibility. Whether it's a game, a program or a DE, it's nice to be able to run it in the compatibility layer. I think the golden goal is that anything that runs in X shall be able to run in XMir or XWayland without any changes.
And to everyone in this thread:
Thanks for finally having a thread where discussion works! I'm so sick of the other threads that are so full of shitstorming. Thanks!
-
Originally posted by Pajn View Post
Some tests were actually (a tiny bit) better than pure Xorg. This surprises me a bit. Can anyone explain how that is possible?
In X, we render directly to the screen.
In X+Unity, we render to a backbuffer, tell Unity about what is rendered, Unity copies that to its own backbuffer, then tells X to swap buffers (i.e. replace the scanout with Unity's backbuffer), which X does with a pageflip.
In X+Unity+XMir, we render to a backbuffer, tell Unity about what is rendered, Unity copies that to its own backbuffer, then tells X to swap buffers; X then copies that to its own frontbuffer and, when Mir requests a new frame, copies that to the buffer provided by Mir. (Now with composite bypass enabled, the buffer provided by Mir is its back buffer, eliminating one of the many copies.) Mir then pageflips between its backbuffer and the scanout.
What you are seeing here is that most of these benchmarks are not stressing the graphics driver at all [QGears, gtkperf], or do not render to the screen [cairo-trace], and so they show very little difference for those extra two copies at 60Hz. The one that shows some difference, x11perf -comppixwin500, is really just a driver artifact - the error bar indicates that for one run, the driver got stuck using the BCS for the repeated copies rather than the RCS. That happens irrespective of the compositing system - it just requires something else to render at the wrong time to trick the driver into believing that we are going to be using the BCS for the near future, so it then continues to use the BCS to avoid the overhead of switching rings.
Throughput testing in applications seems to be around the 10% mark below X+Unity, which itself is about 30% slower than raw X, and we haven't even started to talk about the increased power consumption yet...
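The chain of hand-offs described above can be written down as a toy model. The step labels below are descriptive, not real API calls; rendering into your own backbuffer is not counted as a copy, and the bypass case assumes exactly one copy is removed, as the post states:

```python
# Toy model of the per-frame buffer hand-offs described in the post.
# Labels are descriptive only; they do not correspond to real API calls.
PIPELINES = {
    "X": [
        "pageflip",  # client renders straight to the scanout
    ],
    "X+Unity": [
        "copy: client backbuffer -> Unity backbuffer",
        "pageflip",  # X pageflips to Unity's backbuffer
    ],
    "X+Unity+XMir": [
        "copy: client backbuffer -> Unity backbuffer",
        "copy: Unity backbuffer -> X frontbuffer",
        "copy: X frontbuffer -> Mir-provided buffer",
        "pageflip",  # Mir flips its backbuffer to the scanout
    ],
    "X+Unity+XMir (composite bypass)": [
        # Mir hands out its own backbuffer, eliminating one copy.
        "copy: client backbuffer -> Unity backbuffer",
        "copy: Unity backbuffer -> Mir backbuffer",
        "pageflip",
    ],
}

def copies_per_frame(name):
    """Count the full-screen copies each configuration pays per frame."""
    return sum(step.startswith("copy") for step in PIPELINES[name])

for name in PIPELINES:
    print(f"{name}: {copies_per_frame(name)} copies/frame")
```

Counting it out this way, plain XMir pays two extra copies per frame relative to X+Unity (the "extra two copies" mentioned above), and composite bypass brings that down to one.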
-
Originally posted by ickle View Post
Based on the architecture it is not.
In X, we render directly to the screen.
In X+Unity, we render to a backbuffer, tell Unity about what is rendered, Unity copies that to its own backbuffer, then tells X to swap buffers (i.e. replace the scanout with Unity's backbuffer), which X does with a pageflip.
In X+Unity+XMir, we render to a backbuffer, tell Unity about what is rendered, Unity copies that to its own backbuffer, then tells X to swap buffers; X then copies that to its own frontbuffer and, when Mir requests a new frame, copies that to the buffer provided by Mir. (Now with composite bypass enabled, the buffer provided by Mir is its back buffer, eliminating one of the many copies.) Mir then pageflips between its backbuffer and the scanout.
What you are seeing here is that most of these benchmarks are not stressing the graphics driver at all [QGears, gtkperf], or do not render to the screen [cairo-trace], and so they show very little difference for those extra two copies at 60Hz. The one that shows some difference, x11perf -comppixwin500, is really just a driver artifact - the error bar indicates that for one run, the driver got stuck using the BCS for the repeated copies rather than the RCS. That happens irrespective of the compositing system - it just requires something else to render at the wrong time to trick the driver into believing that we are going to be using the BCS for the near future, so it then continues to use the BCS to avoid the overhead of switching rings.
Throughput testing in applications seems to be around the 10% mark below X+Unity, which itself is about 30% slower than raw X, and we haven't even started to talk about the increased power consumption yet...
If I understand you correctly, what this basically means is that these kinds of benchmarks don't really mean anything: the differences are too small and the "background noise" is too high. Is that correct?
If so, I guess that would mean that this 10% extra overhead isn't noticeable in real usage, as the "background noise" can be much higher than the slowdown itself.
If so, that's pretty great; sure, it's still a slowdown, but that's what was expected.
What would be more interesting than performance benchmarks would be power consumption. While a small performance degradation isn't noticeable, even the smallest extra power consumption is when running on battery.
Michael (I hope you read this forum):
Could you run some battery benchmarks on X, XWayland and XMir? I think that
would be very interesting. Especially considering that XWayland is rootless, as
that is what will mainly be used in the future.
It would also be nice if XMir were run with both Unity and something slimmer like
XFCE to see how much of the consumption is in the DE itself.
-
Originally posted by Pajn View Post
Thank you for an informative and interesting post!
If I understand you correctly, what this basically means is that these kinds of benchmarks don't really mean anything: the differences are too small and the "background noise" is too high. Is that correct?
If so, I guess that would mean that this 10% extra overhead isn't noticeable in real usage, as the "background noise" can be much higher than the slowdown itself.
Originally posted by Pajn View Post
If so, that's pretty great; sure, it's still a slowdown, but that's what was expected.
What would be more interesting than performance benchmarks would be power consumption. While a small performance degradation isn't noticeable, even the smallest extra power consumption is when running on battery.
-
Originally posted by TheBlackCat View Post
How is it "superior" to be running a rooted X inside Mir/Wayland? It is not that this sort of thing would be fundamentally impossible in XWayland, it is just that nobody implemented it because they didn't see any reason to. So what is the advantage of running rooted X inside XMir/XWayland?
Originally posted by Vim_User View Post
Any link to that statement?
Link: http://wiki.xfce.org/releng/4.12/roadmap/gtk3
-
Originally posted by nadro View Post
You can talk directly to Mir even if a fullscreen XMir session is active. What about fullscreen games? Remember the events: the XEvent system isn't very fast, so some performance improvements may come from there. Anyway, a simple buffer swap should be a little more efficient in Mir and in Wayland than in X, too.
On the point about talking directly to Mir, what I meant is that you can't use it from inside the X session. You could use it, but then you'd be restricted to running fullscreen.
BTW, this discussion of Mir vs. Wayland is really fun right now. Both display servers are really similar from a game developer's point of view, so writing two different backends isn't a problem. If Canonical wants its own DS, who cares? Welcome to the open source world, where everyone can do what they want.
And writing two different backends IS a problem. Extra work means extra money, which makes it harder to meet the cost/benefit ratio a company expects before porting anything.
I'm happy that X will soon be replaced by a modern DS. Both the Mir and Wayland teams did a great job for the Linux world. Competition always improves the level of software. If someone has that many problems with Mir, look at what Intel did with OpenCL; it's the same situation.
Also, Intel's own OpenCL implementation (which I already criticized in the appropriate thread, mind you) is a whole different thing from Mir, since software written for OpenCL should run on Intel's implementation as well as on Gallium. This means their fragmentation is localized: it only means they aren't sharing the workload with Gallium. In the case of Mir versus Wayland, you need to write software for Mir or for Wayland. In the common case, this only means porting toolkits to an extra platform. In less common cases, it means writing two backends or supporting only a fraction of the Linux desktop. This is the same problem we have with toolkits.
Originally posted by Pajn View Post
Wayland and Mir hope to finally give us a flicker-free boot experience.
Some DEs won't get ported to Wayland or Mir (XFCE, for example, has said it will stick to X).
So if you want to use XFCE with a flicker-free boot experience you will have to use Mir as a system compositor, compositing the boot and the login manager, and then run XFCE in XMir.
Originally posted by TheOne View Post
That means they will have Wayland support out of the box.
XFCE said that for now they'll stick to X; they never stated anything like "we will never use either Mir or Wayland". I think that's the sane decision given their limited workforce: wait until the projects with the most people do the hard bits, and get a somewhat easier port. In the end, I'd expect them to use Wayland.
Also, I wouldn't sacrifice overall everyday performance just for a flicker-free boot. It's a personal choice, but I think it's the wrong one.
Also, I like the idea of full compatibility. Whether it's a game, a program or a DE, it's nice to be able to run it in the compatibility layer. I think the golden goal is that anything that runs in X shall be able to run in XMir or XWayland without any changes.
I mean, having options and legacy compatibility is alright, but if you are actually making no use of the modern features, you are just wasting cycles on a middleware layer that does nothing, and you keep dragging behind you all of the crufty problems of X.
Last edited by mrugiero; 11 September 2013, 11:58 AM.
-
Originally posted by mrugiero View Post...
The window manager, a core part of any DE, talks directly to the X server. Porting to a different toolkit doesn't change this (I heard Qt5 is an exception to this, but I'm not really sure), and it means you need to explicitly port it to Wayland.
...