Intel Linux Graphics On Ubuntu Still Flaky


  • phoronix
    started a topic Intel Linux Graphics On Ubuntu Still Flaky

    Phoronix: Intel Linux Graphics On Ubuntu Still Flaky

    Back in May we shared that Ubuntu's Intel graphics performance was still in bad shape after testing very early Ubuntu 9.10 packages. The netbook experience was killed in Ubuntu 9.04 after a buggy Intel Linux graphics stack led to slow performance, stability issues, screen corruption, and other problems. Months have passed since we last exhaustively looked at the Intel Linux graphics stack, but we have just carried out some new tests using Ubuntu 9.10 Alpha 3. This new development release of Ubuntu carries the latest kernel, Mesa, and Intel driver packages, and we see how the graphics performance fares with Intel 945 and G43 chipsets.

    http://www.phoronix.com/vr.php?view=14082

  • Casandro
    replied
    Great improvement over 9.04

    I recently switched from 9.04 to 9.10 and it was totally worth it. With 9.04 I had regular screen corruption in KDE; 9.10 even supports desktop effects, flawlessly.

    The only complaint I have is that it doesn't support anything but the VGA output on my motherboard, so no DVI and no dual-monitor setup. That's frustrating.

    Leave a comment:


  • combuster
    replied
    I've posted my results a few pages back; I just didn't mention them on freedesktop (I was running a full set of GtkPerf tests at once)... I don't know how Michael got those results, but it could be because of some update that landed in the Ubuntu 9.10 repos which Gordon couldn't replicate. As for myself, I'm running my tests with a GM965 on Arch Linux with a custom kernel, not with a G43 on Ubuntu, so at first I thought the G43 was the only chip affected by this regression, but apparently it isn't. And many of my fellow Archers are having the same experience with the 2.6.31 kernel: Intel GPUs run faster, and that is a fact...

    http://www.phoronix.com/forums/showp...8&postcount=28

    Leave a comment:


  • squirrl
    replied
    Pull the latest mesa git.

    It's running smoothly in UT2004 now...
    No idea why yet; I haven't read the changelog. Sleep.

    Leave a comment:


  • squirrl
    replied
    Unreal Tournament 2004 -
    800x600 it's ok. Playable.
    1024x768 it starts stuttering.
    1280x1024 forget it.
    ---------------------------------
    Glxgears -
    800 -> 1100 fps sometimes 24000 fps <-- weird
    ----------------------------------
    Nexuiz - awful but manages
    ----------------------------------
    Blender 3D latest
    trash
    ----------------------------------
    KMS (Kernel Mode Setting) has some quirks. External LCDs suffer.
    ----------------------------------
    I own a few games and I've run a few tests, and I'm sick of hearing about performance improvements from the Intel developers. They need to shut up and fix this. No more announcements until it's fixed.

    Ubuntu needs to backport fixes!
    ----------------------------------------------------

    http://anholt.livejournal.com/41306.html Yeah, right.

    Intel right now sucks on every distribution.

    Leave a comment:


  • kxmas
    replied
    Originally posted by combuster:
    Me too. Similar hardware should yield similar results.

    So people who use the new driver and kernel don't have proof that the results are wrong, but their experience says they are.

    Now we have someone with the same type of hardware failing to reproduce the results. Puzzling...

    Leave a comment:


  • mdmadph
    replied
    God, that's horrible performance

    Leave a comment:


  • combuster
    replied
    I'm really confused now...

    http://bugs.freedesktop.org/show_bug.cgi?id=23083#c6

    Leave a comment:


  • 7oby
    replied
    Originally posted by Linuxhippy:
    However, at least the JXRenderMark tests were developed to emulate common paths of Java2D's XRender pipeline.
    I appreciate the Java XRender effort and the posted benchmark as well.

    It seems the general user, or at least I, will not profit from it:

    . It requires the "-Dsun.java2d.xrender=True" launch option, which only very few people are aware of and know how to enable (see the sketch after this list). Definitely not the out-of-the-box experience - besides possible visual corruption bugs.

    . The only major Java app that I use is Eclipse, which is based on SWT for GTK+, and though GTK+ uses XRender, I don't know whether there's an intersection between the functions used by GTK+ and those in the JXRenderMark benchmark.

    . Most Java apps that I write are headless server apps and backend components. However, I once wrote an application for a digital astronomical camera, which made heavy use of Java2D and in particular "BufferedImage" to store and manipulate 16-bit grayscale images (histograms, gamma corrections, zooming). And yet: "BufferedImageOps are not accelerated because XRender does not provide functionality required to accelerate those features."
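
    To make those two points concrete, here is a minimal, purely hypothetical sketch (class name, image size, and gamma value are all invented) of the kind of 16-bit grayscale work described above. It would have to be launched with the "-Dsun.java2d.xrender=True" flag for the XRender pipeline to be active at all, and the per-pixel loop stands in for the BufferedImageOps that the quote says are not accelerated:

    // Run as: java -Dsun.java2d.xrender=True GammaSketch
    import java.awt.image.BufferedImage;
    import java.awt.image.WritableRaster;

    public class GammaSketch {
        public static void main(String[] args) {
            // A 16-bit grayscale frame, like those from the camera app above.
            BufferedImage img =
                new BufferedImage(640, 480, BufferedImage.TYPE_USHORT_GRAY);
            WritableRaster raster = img.getRaster();

            double gamma = 2.2;        // illustrative value
            int[] px = new int[1];
            for (int y = 0; y < img.getHeight(); y++) {
                for (int x = 0; x < img.getWidth(); x++) {
                    raster.getPixel(x, y, px);
                    double norm = px[0] / 65535.0;   // normalize the 16-bit range
                    px[0] = (int) Math.round(
                            Math.pow(norm, 1.0 / gamma) * 65535.0);
                    raster.setPixel(x, y, px);
                }
            }
        }
    }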

    It seems I currently don't have a use case for XRender performance. But as I said: the effort itself is very valuable.

    --

    To determine whether a given Ubuntu release has regressed or improved user-visible performance, I suggest four benchmarks measuring

    . compiz
    . firefox
    . Adobe flash
    . one OpenGL based game

    performance. I also appreciate micro-benchmarks, but meaningful ones. Even on a 2560x1600 display I can't think of an application that is limited by GTK radio button rendering performance, but I immediately recognize if scrolling performance degrades. I don't know whether the latter is included in one of the low-level benchmarks; a sketch of what such a test might look like follows below.
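
    Since toolkits typically scroll by copying the existing window contents, a scrolling micro-benchmark could be as small as timing repeated copyArea calls. The following is only a hypothetical illustration (class name, window size, and iteration count are invented), not a test from any existing suite:

    import java.awt.Frame;
    import java.awt.Graphics;
    import java.awt.Toolkit;

    public class ScrollSketch {
        public static void main(String[] args) throws InterruptedException {
            Frame f = new Frame("scroll micro-benchmark");
            f.setSize(800, 600);
            f.setVisible(true);
            Thread.sleep(500);                        // give the window time to map

            Graphics g = f.getGraphics();
            long start = System.nanoTime();
            for (int i = 0; i < 1000; i++) {
                g.copyArea(0, 16, 800, 584, 0, -16);  // scroll the contents up 16 px
            }
            Toolkit.getDefaultToolkit().sync();       // flush requests to the X server
            System.out.printf("1000 scroll copies: %.1f ms%n",
                    (System.nanoTime() - start) / 1e6);
            g.dispose();
            f.dispose();
        }
    }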

    Storage Review provides a very clear explanation of their testing methodology:
    http://www.storagereview.com/Testbed4.sr
    You learn what each micro-benchmark measures and how to judge whether that particular aspect matters to you or not.

    Maybe Phoronix can provide something like this, and we can start a DISCUSSION about which tests to include in reviews.
    Last edited by 7oby; 08-05-2009, 03:27 AM.

    Leave a comment:


  • mtippett
    replied
    Originally posted by Linuxhippy:
    However, at least the JXRenderMark tests were developed to emulate common paths of Java2D's XRender pipeline.
    So you are right, JXRenderMark is a microbenchmark; however, it has been designed to mimic the real-world behaviour of real software, to allow driver developers to optimize their drivers for some piece of real software.
    I agree with aspects of this post.

    A suitable analogy is that of a big city. You can look at the macro efficiency of the city's ability to move people around, or to get them to a particular event - this is like a "real world" benchmark. You can then profile different control points to determine whether some part of the urban planning (the architecture) is flawed for a particular traffic flow.

    To "manage" urban traffic flow, almost every city puts traffic flow monitoring around traffic lights as well as on the major traffic paths ("known choke points" and "common code paths").

    Ideally these paths allow characterization of the bigger problems, and if you keep traffic flowing at these checkpoints you can generally keep the city flowing nicely.

    Now of course, the difficulty is choosing the checkpoints that represent the common traffic (or performance) conditions that will affect real-world usage.

    A second issue to consider is the relevance of the benchmarks. x11perf, in its day, was a great tool for analyzing the entire Xlib path. There were stippled lines, ellipses, and everything the Athena toolkit used on a regular basis, so if you had a performance issue reported, you could contrast old x11perf results against new results. Most likely, for any given performance issue, there was a particular test that lay fair and square on the active path for that issue.

    Most developers have neither the time nor the interest to create a test app that covers a large proportion of their API. Consequently we move out to large applications and hope to benchmark them. To benchmark them you need either a higher-level benchmark framework to capture a profile, or intrinsic benchmarkability somewhere in the pipe.

    You still end up creating a representative benchmark for a particular application flow. If you as a user don't have a usage pattern similar to the "real world application" benchmark, you're basically out of luck. Likewise, if the application changes and the benchmark isn't updated, you may be tuning for the wrong thing.

    A great example of this is Wine, which has recently switched to using RENDER for most of its display compositing. If a representative benchmark doesn't include RENDER, then that real-world benchmark isn't worth anything anymore. *BUT* you will know that a micro-benchmark (such as, say, JXRenderMark) will probably give you an indication of the RENDER performance.

    Each has its place, but something is better than nothing.

    The problem is this:
    If you report a performance bug against a large piece of software, they won't listen because they don't want to investigate it on their own. If you write small benchmarks, you are ignored because those are only microbenchmarks.
    Some JXRenderMark tests show a horrible regression, e.g. scanline-based rendering got 50% slower.
    So if your app is not cairo-based *cough, cough*, you are out of luck!

    ...
    Another great point.

    I proxy a large number of issues into AMD. When an issue gets my attention and begins to move into the driver team, I generally refuse to bring in either a macro benchmark or a monolithic application.

    I ask for either a representative benchmark of the bottleneck they are expecting, or a reduced test case that demonstrates the problem.
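
    As a purely illustrative example (not an actual AMD test case; the class name and numbers are invented), a reduced test case for a 2D rendering slowdown could be as small as the following: isolate one suspect operation - here antialiased fills, which typically map to RENDER requests - repeat it, and time it.

    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.Toolkit;
    import javax.swing.JFrame;
    import javax.swing.JPanel;
    import javax.swing.SwingUtilities;

    public class FillTestCase extends JPanel {
        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D) g;
            // Antialiased fills are the suspect operation in this sketch.
            g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                                RenderingHints.VALUE_ANTIALIAS_ON);
            long start = System.nanoTime();
            for (int i = 0; i < 1000; i++) {
                g2.fillOval(i % 200, i % 150, 64, 64);
            }
            Toolkit.getDefaultToolkit().sync();  // flush requests to the X server
            System.out.printf("1000 antialiased fills: %.1f ms%n",
                    (System.nanoTime() - start) / 1e6);
        }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame f = new JFrame("reduced test case");
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.add(new FillTestCase());
                f.setSize(320, 240);
                f.setVisible(true);
            });
        }
    }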

    Most developers are aware of the behaviour of their code and what happens under different workloads. Unsurprisingly, given that understanding, getting the above doesn't cause too many issues: we typically get a smaller test case demonstrating the problem, or we get pointed to other micro-benchmarks that can serve as a proxy.

    Thanks for the great points.

    Leave a comment:
