Avivo Linux R500 Driver v0.1.0 Coming

  • glisse
    replied
    Originally posted by yoshi314 View Post
    How can one identify bottlenecks in a video card driver? I always thought it was nigh impossible.

    Is it mostly guesswork, or are there actual tools for the job?
    Using a profiler shows where we spend time, and then we can look at how to improve it (e.g. on r300, oprofile shows that we lose a lot of time uploading vertices: no memory management). We can also hook performance counters into the driver to see how much time it takes to do things like uploading a texture. Finally, the hardware has performance counter registers to help figure out how to optimize things for the card (unfortunately we don't have any information on those registers; anyway, today most of the slowdown is in the software driver). The last tools are things like gldebugger.
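    A minimal sketch of the kind of driver-side timing hook glisse describes; upload_texture() is a hypothetical placeholder for a real hot path, not actual r300/Avivo code:

        /* Sketch: time one suspected hot path and log the result.
         * upload_texture() is a made-up placeholder, not real driver API. */
        #include <stdio.h>
        #include <time.h>

        extern void upload_texture(void *data, size_t size);  /* placeholder */

        static double elapsed_ms(struct timespec a, struct timespec b)
        {
            return (b.tv_sec - a.tv_sec) * 1000.0 +
                   (b.tv_nsec - a.tv_nsec) / 1e6;
        }

        void upload_texture_timed(void *data, size_t size)
        {
            struct timespec t0, t1;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            upload_texture(data, size);           /* the path being measured */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            fprintf(stderr, "texture upload: %zu bytes in %.3f ms\n",
                    size, elapsed_ms(t0, t1));
        }

    Logging like this around each suspected hot path gives per-operation numbers that a flat oprofile report does not.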



  • yoshi314
    replied
    Originally posted by glisse View Post
    There are other bottlenecks too, like some GL paths that are not handled well enough currently (Enemy Territory uses some of these). Anyway, there is work underway to address most of the pitfalls of the current open source driver. What is hard is that we need to keep backward compatibility, so we end up doing crazy things just to make sure we do not break old setups (new DRM should work with new X server, new X server should work with old DRM, new DRI driver should work with old DRM, ...).
    How can one identify bottlenecks in a video card driver? I always thought it was nigh impossible.

    Is it mostly guesswork, or are there actual tools for the job?

    Originally posted by Michael View Post
    While the glass may not be full of water, there is a water truck on the way.
    Now that I think about it, why do we need a whole truck to fill a glass? :>

    edit:
    on the way.
    on the way... ohh, I get it now.
    Last edited by yoshi314; 23 July 2007, 09:31 AM.



  • glisse
    replied
    Originally posted by yoshi314 View Post
    I wonder who's primarily to blame for poor gfx performance on better cards: driver programmers, hw engineers, or perhaps Linux kernel hackers?
    My opinion on this is that DRI/DRM needed some work, especially on memory management; once we get memory management properly working and cleverly used, I am sure we will see a speed boost (e.g. at the moment all vertices are memcpy'd at least once whenever a program renders anything, and that is costly; with proper memory management we should no longer need that copy). There are other bottlenecks too, like some GL paths that are not handled well enough currently (Enemy Territory uses some of these). Anyway, there is work underway to address most of the pitfalls of the current open source driver. What is hard is that we need to keep backward compatibility, so we end up doing crazy things just to make sure we do not break old setups (new DRM should work with new X server, new X server should work with old DRM, new DRI driver should work with old DRM, ...).
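    To make the memcpy point concrete, here is a rough sketch of the two paths; gpu_map_buffer()/gpu_unmap_buffer() are invented placeholders for whatever a real memory manager would expose, not actual DRM or Mesa API:

        #include <string.h>

        extern void *gpu_map_buffer(size_t bytes);   /* hypothetical memory-manager API */
        extern void  gpu_unmap_buffer(void *ptr);

        /* Today: vertices are built in ordinary malloc'd memory, then
         * memcpy'd into a DMA-able buffer before the GPU can see them --
         * one extra copy on every draw. */
        void submit_vertices_copy(const float *verts, size_t bytes, void *dma_buf)
        {
            memcpy(dma_buf, verts, bytes);   /* the copy glisse says is costly */
            /* ... emit GPU commands referencing dma_buf ... */
        }

        /* With a proper memory manager: map a GPU-visible buffer and write
         * the vertices straight into it, so no staging copy is needed. */
        void submit_vertices_direct(size_t bytes)
        {
            float *verts = gpu_map_buffer(bytes);
            /* ... generate vertex data directly into verts ... */
            gpu_unmap_buffer(verts);
            /* ... emit GPU commands referencing the mapped buffer ... */
        }

    The second path is what a working memory manager buys: the driver hands out GPU-visible memory up front, so vertex data is written exactly once.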



  • d2kx
    replied
    The guys that don't want more driver developers because they cost money.



  • yoshi314
    replied
    Originally posted by d2kx View Post
    What I mean is that the GF8 does not have its full performance on Linux at the moment, so AMD could beat nVidia.
    I wonder who's primarily to blame for poor gfx performance on better cards: driver programmers, hw engineers, or perhaps Linux kernel hackers?



  • d2kx
    replied
    And when the truck arrives, it may have overhauled the competition.

    Okay, that's enough. It sounds philosophical and I don't like that. What I mean is that the GF8 does not have its full performance on Linux at the moment, so AMD could beat nVidia. That would be funny, because people would laugh at you if you told them that your fglrx driver is faster than the nVidia driver.



  • Michael
    replied
    Originally posted by rolz View Post
    There really is no point in trying to see the glass as half full, we all know it's nearly empty.
    While the glass may not be full of water, there is a water truck on the way.



  • rolz
    replied
    Originally posted by hdas View Post

    Meanwhile, XGL does work fine and is stable with fglrx (no 3D, of course). Even on a modest Radeon 200M, it's pretty good. On another note, the image quality of the fglrx drivers is pretty good.
    Strange world you live in. If you don't mind random crashes and the accompanying loss of data... and various other bugs.

    And Xgl is quite dead as a codebase too, IIRC. Other than showing off Beryl (and hoping), Xgl is no good.

    There really is no point in trying to see the glass as half full, we all know it's nearly empty.

    imho.



  • Michael
    replied
    Originally posted by hdas View Post
    PS: After seeing the gtkperf benchmark (never heard of it before), I installed it and was happy to see it give 160 seconds on the GeForce 7400 Go in my notebook. Btw, were the tests performed in that same small default window or full screen?
    Default. The only change was setting the number of times to 1,000.



  • yoshi314
    replied
    As an aside, it's not just Linux; ATI's performance in OS X is equally horrible. My sister has an iMac with an X1600, on which Doom 3 @ 640x480 gives about 50 fps in OS X, a little better in Linux (55 fps or so, and a little smoother too; it feels like the fps is capped or something ;-)
    Hmm, could it be that these cards are designed primarily with DirectX in mind?

    Just a random thought.

