
Considering a new GPU soon. How's the 7700 series on Linux?


  • bridgman
    replied
    Driver optimization is definitely a case of *seriously* diminishing returns as you go further up the curve. I haven't done any curve-fitting recently but there's no question that the first 5-10% of the work can give you maybe 60-70% of the satisfaction *if* you choose the right 10% to work on.

    I've been pretty happy running low-midrange cards with the open source graphics driver (HD 5670 was the last card I bought) and getting decent performance, with a few caveats:

    1. I don't have enough free time to do much gaming, so my workloads are biased toward the less-performance-critical

    2. One of the important optimization tasks is either reducing CPU overhead or spreading the overhead across multiple threads (which has pretty much the same effect if you have multiple cores), and a faster GPU doesn't help in cases where you are CPU limited (see the sketch below).

    3. There is some fairly low-hanging fruit that comes from identifying "really slow" cases where either the driver doesn't accelerate a certain function as much as it could or there are side effects from the current acceleration (lots of memory copies etc...)... again, not all of these are helped by faster hardware although many of them are.

    Some of (2) has already been done (e.g. Marek added multithreading last summer) and it's probably fair to say that a small amount of (3) happened last year but it's just started to really ramp up recently.
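    As a rough illustration of point (2) above: a frame cannot finish before both the CPU-side driver work and the GPU work are done, so once the CPU side dominates, a faster GPU buys nothing, while spreading the driver overhead across cores does help. A minimal Python sketch with made-up timings (it assumes the overhead splits perfectly across threads, which no real driver achieves):

        # Minimal sketch: a frame is limited by whichever of the CPU-side driver
        # work or the GPU-side rendering takes longer. Made-up numbers only.

        def frame_time_ms(cpu_driver_ms, gpu_render_ms, cpu_threads=1):
            cpu_ms = cpu_driver_ms / cpu_threads      # idealised perfect scaling
            return max(cpu_ms, gpu_render_ms)

        # CPU-limited case: 8 ms of driver overhead, 4 ms of GPU work.
        print(frame_time_ms(8.0, 4.0))                  # 8.0 ms -> 125 fps
        print(frame_time_ms(8.0, 2.0))                  # still 8.0 ms: a 2x faster GPU buys nothing
        print(frame_time_ms(8.0, 4.0, cpu_threads=2))   # 4.0 ms: threading the driver overhead helps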



  • maldorordiscord
    replied
    Originally posted by bridgman View Post
    Tough question. The open source driver is maybe 1/100th the size and complexity of the proprietary driver (closer to 1/30th if you include all the Mesa common code) and in the areas where performance depends on cubic developer-years of optimization the open source driver is likely to always be slower simply because the proprietary drivers share code across all OSes and development costs can be shared across almost 100% of the PC market.

    That said, I don't expect the difference to be that big, and I also expect there will be a number of workloads where you do get performance parity quickly. The initial performance estimates we made were based on having a couple of AMD developers and maybe 6-8 full-time-equivalent community developers (not the thousands of developers that were being talked about). Right now the number of AMD developers working on 2D/3D performance is pretty much what we planned (we hired more devs than originally planned but they aren't all working on 3D graphics) and the community developer pool is a bit smaller than we had expected.

    Performance gains are running maybe 12-18 months behind what I expected (which pretty much fits the difference in #developers), but all indications are still that going from "blob is 3x as fast" to "blob is 1.5x as fast" (roughly where r300g seems to be today on 5xx hardware) should happen fairly quickly, say within a year. What I don't know is whether the r600g driver is going to need a fancier shader compiler to get there.

    So... yes, I think the current model is sustainable. It's easy to forget that the devs have implemented support for ~10 years of hardware (2002-2012) in less than 5 years of development (2007-2012), and that now that new hardware support is close to being "caught up", relatively more of that effort can go into features, performance, etc...
    Calculated with your numbers, the logical conclusion is:
    The open-source driver costs roughly 1/30 of the proprietary one and delivers about 66% of its speed.
    This means you only need roughly 50% faster hardware (1/0.66 ≈ 1.5x) to save 29/30 of the development costs.
    That is about the difference between a standard HD 7950 (3 GB VRAM, 900 MHz) and an HD 7970 (1000 MHz core, 3 GB VRAM at a 1400 MHz memory clock).
    That is the difference between 306,42 € and roughly 406 €.
    I don't know how much the development of either driver costs, but for consumers it is only about a 100 € difference per high-end card.
    In my view, AMD should drop the closed-source driver, save that development money, and go with the much cheaper open-source driver model.
    But maybe I'm wrong, and driver development is so cheap that they make plenty of profit on this 100 € difference per card.
    But if driver development really is that cheap, then the closed-source driver is even more of a fake.
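    A quick back-of-the-envelope check of that trade-off, using only the rough figures quoted in this thread (the "blob is 1.5x as fast" estimate and the two card prices above). Purely illustrative: performance does not scale linearly with clock speed or price, and the prices are the ones quoted here, not current ones.

        # Back-of-the-envelope sketch using the figures quoted in this thread.
        # Illustrative only: real performance does not scale linearly with price.

        oss_relative_speed = 1 / 1.5                  # open driver at ~66% of the blob
        hw_speedup_needed = 1 / oss_relative_speed    # = 1.5, i.e. ~50% faster hardware

        card_cheap_eur = 306.42                       # HD 7950 price quoted above
        card_fast_eur = 406.00                        # HD 7970 price quoted above (rounded)
        extra_cost_per_card = card_fast_eur - card_cheap_eur

        print(f"hardware needs to be ~{(hw_speedup_needed - 1) * 100:.0f}% faster")
        print(f"extra cost per high-end card: ~{extra_cost_per_card:.0f} EUR")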



  • bridgman
    replied
    Originally posted by barkas View Post
    That leads to the question of whether the present low-manpower way in which the OSS driver is built is sustainable, or whether it will always be more stable but much slower than the blob. I'm primarily benchmarking XBMC, and I think the blob is about three times as fast as the OSS driver there.
    Tough question. The open source driver is maybe 1/100th the size and complexity of the proprietary driver (closer to 1/30th if you include all the Mesa common code) and in the areas where performance depends on cubic developer-years of optimization the open source driver is likely to always be slower simply because the proprietary drivers share code across all OSes and development costs can be shared across almost 100% of the PC market.

    That said, I don't expect the difference to be that big, and I also expect there will be a number of workloads where you do get performance parity quickly. The initial performance estimates we made were based on having a couple of AMD developers and maybe 6-8 full-time-equivalent community developers (not the thousands of developers that were being talked about). Right now the number of AMD developers working on 2D/3D performance is pretty much what we planned (we hired more devs than originally planned but they aren't all working on 3D graphics) and the community developer pool is a bit smaller than we had expected.

    Performance gains are running maybe 12-18 months behind what I expected (which pretty much fits the difference in #developers), but all indications are still that going from "blob is 3x as fast" to "blob is 1.5x as fast" (roughly where r300g seems to be today on 5xx hardware) should happen fairly quickly, say within a year. What I don't know is whether the r600g driver is going to need a fancier shader compiler to get there.

    So... yes, I think the current model is sustainable. It's easy to forget that the devs have implemented support for ~10 years of hardware (2002-2012) in less than 5 years of development (2007-2012), and that now that new hardware support is close to being "caught up", relatively more of that effort can go into features, performance, etc...
    Last edited by bridgman; 12 July 2012, 06:32 PM.



  • Dandel
    replied
    Originally posted by barkas View Post
    Those things have certainly gotten much faster and very much harder to write a driver for.
    That leads to the question of whether the present low-manpower way in which the OSS driver is built is sustainable, or whether it will always be more stable but much slower than the blob. I'm primarily benchmarking XBMC, and I think the blob is about three times as fast as the OSS driver there.
    I would take benchmarks with a grain of salt... after all, the binary driver can easily lie outright, crash, produce incorrect results (mainly bugs), and do various other things.



  • barkas
    replied
    Originally posted by bridgman View Post
    I suspect the present OSS driver is actually faster than the older versions were on today's workloads, and that the driver is being asked to do more work than before in order to provide a slicker-looking UI. Even adding a compositor makes a big change in the driver workload.

    A much bigger issue is that older GPUs dedicated a big chunk of die area to optimized 2D acceleration hardware, while most modern GPUs use the 3D engine for pretty much everything and don't even *have* 2D hardware. In our case, the R5xx and RS6xx generations were the last ones with 2D acceleration hardware.

    Performance on the kind of benchmarks you ran in 2002 is probably lower on modern hardware, but that's a hardware change not a driver change.
    Those things have certainly gotten much faster and very much harder to write a driver for.
    That leads to the question of whether the present low-manpower way in which the OSS driver is built is sustainable, or whether it will always be more stable but much slower than the blob. I'm primarily benchmarking XBMC, and I think the blob is about three times as fast as the OSS driver there.



  • bridgman
    replied
    I suspect the present OSS driver is actually faster than the older versions were on today's workloads, and that the driver is being asked to do more work than before in order to provide a slicker-looking UI. Even adding a compositor makes a big change in the driver workload.

    A much bigger issue is that older GPUs dedicated a big chunk of die area to optimized 2D acceleration hardware, while most modern GPUs use the 3D engine for pretty much everything and don't even *have* 2D hardware. In our case, the R5xx and RS6xx generations were the last ones with 2D acceleration hardware.

    Performance on the kind of benchmarks you ran in 2002 is probably lower on modern hardware, but that's a hardware change not a driver change.
    Last edited by bridgman; 12 July 2012, 05:53 PM.



  • barkas
    replied
    Originally posted by bridgman View Post
    My understanding was that we stopped providing support for open drivers around 2002, when the fglrx driver was first introduced with a Linux-specific code base. I was told the information flow basically stopped after r300 2D and before r300 3D.

    There are three driver architectures under discussion here, not two:

    - open source driver, supported with info from ATI until ~2002, support restarted in 2007
    - proprietary Linux-only driver, starting with r200 and the primary option for r300, ~2002 through 2004
    - proprietary Linux driver code sharing with other OSes, incremental transition between 2004 and 2007 then stable-ish architecture from 2007 on
    Your dates are probably more accurate than my memory. Anyway, the first OSS driver was the best, followed by the present OSS driver, which is good, if sometimes very slow. The first proprietary driver was certainly the worst; the present one isn't great, but it's better.

    @kano: When AMD took over, it got better in my opinion.



  • Kano
    replied
    It makes even more sense when you know that ATI was integrated into AMD at the end of 2006.



  • bridgman
    replied
    My understanding was that we stopped providing support for open drivers around 2002, when the fglrx driver was first introduced with a Linux-specific code base. I was told the information flow basically stopped after r300 2D and before r300 3D.

    There are three driver architectures under discussion here, not two:

    - open source driver, supported with info from ATI until ~2002, support restarted in 2007
    - proprietary Linux-only driver, starting with r200 and the primary option for r300, ~2002 through 2004
    - proprietary Linux driver code sharing with other OSes, incremental transition between 2004 and 2007 then stable-ish architecture from 2007 on



  • barkas
    replied
    Originally posted by bridgman View Post
    Thanks Paul. Normally I focus entirely on tomorrow, but when you said that we did something so terrible that you (and others) would never use AMD products again and I had no idea what you were talking about it seemed worth looking into.

    IIRC 2004 was when we were just starting to move the Linux drivers from a completely separate code base to a shared-code model so we could bring hardware support and features/performance to Linux users more quickly. That work started in 2004 (to get ready for r5xx in 2005) and ran through 2007, with the last big change (moving to a new OpenGL driver stack) coming in Sep 2007.
    As I remember it, the really old driver was fine up to 2004, when ATI stopped giving information to the DRI developers; as far as I know that was Tungsten Graphics back then.
    The period you describe, when that work started in 2004 and ran up until 2007, was the very, very bad time: no real open source any more, and the crappy blob instead.
    Since the documentation has been opened up it has slowly been getting better, but it's still bad.
    I admit that fglrx has gotten a little better over time, but I still consider it almost unusable.
    Last edited by barkas; 12 July 2012, 11:34 AM.

