AMD Catalyst vs. X.Org Radeon Driver 2D Performance

  • highlandsun
    replied
    Sigh... Even on my 8 MHz 68000 Atari ST, opening new windows would just POP onto the screen. It's amazing the amount of visible lag people are willing to put up with these days, and that's not even counting the worst culprit: scrolling a browser window...

    2D performance affects *every* computer user, 3D performance only affects the minority of computer users who play 3D games. People have really got their priorities messed up.



  • drag
    replied
    Well, Linux's 2D stuff has always been 'faster' than Windows, per se.

    But really, nobody cares that much about 2D performance. In Vista, when you're running the composited desktop, you have zero hardware acceleration going into 2D rendering.

    That goes to show you how much it matters.

    With Linux 2D on a composited desktop, I think it's more a matter of all the context switches the drivers have to go through to convert the X server's 2D driver world to the Linux DRM/DRI-managed world. You have to render the item off-screen, then capture the image, then convert the image to something the 3D drivers can manage, then copy that image to a texture, and then render that texture as part of the image we call the desktop.

    So many steps. At that point it doesn't really matter if you have a very fast CPU or a very fast GPU, and it doesn't really matter that each conversion gets done fast, either. Thousands and thousands of CPU cycles can be wasted on every one of those steps... reading instructions from main memory, loading them into cache, executing them, pulling textures in from memory, etc. And each time you do a context switch you're purging your caches and starting over, wasting all sorts of CPU/GPU time.

    I mean, RAM may seem fast, especially compared to disk, but your CPU/GPU will burn through thousands of wasted cycles waiting for data to arrive from main memory or over the PCIe bus.


    The 'correct' way to manage all of that would be to render the application off-screen and have the output written directly to a 3D texture that is mapped onto the 3D primitive that then becomes part of your desktop image.

    Something like that can be done in as close to a single operation as possible.

    But the current driver model for Linux won't allow anything like that. The X.org world and the Linux DRM world are just too heavily split; they were never designed to work together closely... instead you assigned a hunk of the screen for X to render to, then assigned a smaller hunk of the screen for the OpenGL stuff to be rendered to. That's what the 'overlay' provides, and it's fast, but it's ugly. That's how it was designed to work.
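
    For illustration, here's a rough sketch of the texture-from-pixmap path a compositor takes to pull a redirected X window into the 3D world. It assumes a GLX that exposes GLX_EXT_texture_from_pixmap (the mechanism Compiz relies on); fbconfig selection and all error handling are omitted, so treat it as a sketch, not working compositor code:

        /* Sketch: bind a redirected X window's pixmap as a GL texture
         * via GLX_EXT_texture_from_pixmap. Link with -lGL -lXcomposite. */
        #include <X11/extensions/Xcomposite.h>
        #include <GL/glx.h>
        #include <GL/glxext.h>

        GLuint window_to_texture(Display *dpy, Window win, GLXFBConfig fbc)
        {
            /* Redirect the window so it renders off-screen. */
            XCompositeRedirectWindow(dpy, win, CompositeRedirectAutomatic);
            Pixmap pix = XCompositeNameWindowPixmap(dpy, win);

            /* Wrap the pixmap so GLX can treat it as a texture source. */
            const int attrs[] = {
                GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
                GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
                None
            };
            GLXPixmap glxpix = glXCreatePixmap(dpy, fbc, pix, attrs);

            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);

            /* Ideally a zero-copy bind; with a split 2D/3D driver stack
             * this is where the extra conversion and copying creep in. */
            PFNGLXBINDTEXIMAGEEXTPROC bindTexImage =
                (PFNGLXBINDTEXIMAGEEXTPROC)
                glXGetProcAddress((const GLubyte *)"glXBindTexImageEXT");
            bindTexImage(dpy, glxpix, GLX_FRONT_EXT, NULL);

            return tex; /* draw this as a textured quad in the desktop scene */
        }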

    ---------------------------------------

    You see, the trick with Linux right now is that you have two entirely different sets of drivers sharing the same video card: the 2D X.org drivers and the Linux DRM/DRI-based 3D drivers.

    The X.org server goes in and performs actions such as configuring PCI devices and modesetting outside of the Linux kernel's control.

    So you end up with situations where Linux is configuring PCI devices or doing something like that, and X comes along and stomps on it and causes your video card to flake out.

    ---------------------------------------


    So I suppose that with Intel's UXA framework things will be much more efficient.

    Instead of worrying about getting the 2D drivers working better, or porting the 2D X drivers to the 3D Linux-DRM world, they just rewrote the drivers from scratch and implemented the EXA API on top of the Linux DRM's 3D core.

    That way you keep compatibility with current applications, but you render everything directly with the 3D engine. So instead of doing EXA on the card's 2D engine, you do it on the 3D engine.

    That will probably end up being slower in benchmarks than just doing 2D-only with no compositing, but it doesn't really matter: it'll be fast enough, and it'll make it much easier to tackle the performance issues that do matter... video playback acceleration, better composited desktops, a more stable/saner single-driver design, faster 3D performance, etc.
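
    If/when it lands in your driver, selecting the acceleration architecture should just be the usual xorg.conf option (the identifier here is illustrative):

        Section "Device"
            Identifier "Intel Graphics"
            Driver     "intel"
            # "uxa" is the new GEM-backed path; "exa" and "xaa" remain available
            Option     "AccelMethod" "uxa"
        EndSection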



  • highlandsun
    replied
    Would these GTK test programs show meaningful/comparable numbers if ported to Windows? Is there any way to see whether these Linux drivers are really squeezing out all the performance the hardware has to offer, or whether the Windows drivers still have some unexplored tricks to leverage?



  • korpenkraxar
    replied
    Originally posted by DoDoENT View Post
    With the stock configuration I get 30-40 minutes longer battery life with fglrx than with the radeon driver.
    Me too, running Debian Sid on a ThinkPad with a Radeon X1400. According to ThinkWiki over at
    http://www.thinkwiki.org/wiki/How_to...ement_features
    X.org's log file should confirm that scaling is indeed enabled once you specify the DynamicClocks option in the Device section. It does not in my case, and I have absolutely no idea what to do next :-(
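
    For reference, the Device section I'm using follows ThinkWiki exactly (the identifier is just illustrative):

        Section "Device"
            Identifier "Radeon Mobility X1400"
            Driver     "radeon"
            Option     "DynamicClocks" "on"
        EndSection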

    Ideas anyone?



  • NeoBrain
    replied
    Originally posted by nbi1 View Post
    17 out of 28?? All that time was spent on an article to reach that determination? Who exactly are the intended beneficiaries of this info when most of us can't even get 8.12 installed and working properly under Linux? Just have a look at your own forums.
    Well, you must take note of the fact that the people on whose systems the driver runs just fine don't rant as much as the people on whose systems it fails...
    Occasionally you'll see statements like "most stable driver ever" or "never had a problem with it, actually", but they just get overwhelmed by the people who write five or more posts about their problem, and so you get the impression that fglrx doesn't work on most systems.
    And honestly, if you've got an X600 with a PCI-AGP bridge (uhm... if those even existed for the X600 family, but you get the idea), it's quite probable that such chips aren't tested that well (apart from the fact that it's just too much maintenance work for most vendors).

    Another point is that many people first try to generate RPM or DEB packages, e.g. because they keep the system cleaner... I, for example, never really could get the driver running with that method. On the other hand, using the automated fglrx installer, installation works faster and more reliably than anything else (even the livna repos gave me problems at some point).
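
    (The installer route is just the following; the filename is illustrative, use whichever 8.12 build you downloaded from AMD:)

        chmod +x ati-driver-installer-8-12-x86.x86_64.run
        sudo ./ati-driver-installer-8-12-x86.x86_64.run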



  • nbi1
    replied
    17 out of 28?? All that time was spent on an article to reach that determination? Who exactly are the intended beneficiaries of this info when most of us can't even get 8.12 installed and working properly under Linux? Just have a look at your own forums.

    This article would have hit the spot if the majority of us had the option of using either the proprietary or the open-source drivers with equal ease. Until we get there, though, a much better article would have been a comprehensive guide to getting 8.12 working with a particular kernel on a particular distro. Don't get me wrong, I appreciate articles like this, but at this point in time the 2D issue is a non sequitur.

    Cheers.



  • DoDoENT
    replied
    Originally posted by mile View Post
    I have been using PowerPlay with the radeon driver on my Mobility X1600 for a few months now and it is working great... you need a few patches from this branch, I think:

    http://cgit.freedesktop.org/~agd5f/x...=agd-powerplay
    It seems that the commits on this branch are quite old (April 2008). Are you sure this isn't already implemented in Ubuntu Intrepid's default radeon driver? Because if it is, then what do I have to add to my xorg.conf to enable it? With the stock configuration I get 30-40 minutes longer battery life with fglrx than with the radeon driver.
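
    (If anyone wants to check the same thing: assuming the usual Ubuntu package name, this shows which upstream driver snapshot Intrepid ships, which you can compare against that branch's history:)

        apt-cache policy xserver-xorg-video-ati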



  • dungeon
    replied
    Maybe EXA is just not good for the r200, who knows. I see the same performance drop you had, plus EXA also has problems with Mesa and native apps. Try PPRacer with the FPS counter turned on and you will see 0 (zero) FPS at the end of each level.

    All of this works fine with XAA.
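
    Switching back is a single option in the Device section (identifier illustrative):

        Section "Device"
            Identifier "Radeon R200"
            Driver     "radeon"
            # use the older XAA path instead of EXA
            Option     "AccelMethod" "XAA"
        EndSection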



  • suokko
    replied
    It would be interesting to test XAA vs. EXA as well. I have noticed that, at least when running some Windows applications in Wine, EXA has horrible 2D performance: the X server (X.org 1.5.2) takes nearly all the CPU time, sitting at 100% usage at 2.1 GHz with EXA, while XAA takes only around 10% with the CPU running at 796 MHz.

    To me it seems like either Wine is too aggressively optimized for XAA, or EXA has some hidden performance penalty for Wine's poor drawing code. This is surprising because everything else visibly speeds up when I turn EXA on.

    I'm using an R200-series Mobility Radeon.



  • Michael
    replied
    EXA testing was used. Sorry if that wasn't clear in the article.

