Considering a new GPU soon. How's the 7700 series on Linux?


  • Originally posted by barkas View Post
    Those things have certainly gotten much faster and very much harder to write a driver for.
    That leads to the question if the present low manpower way in which the OSS driver is built is sustainable or if it will always be more stable but much slower than the blob. I'm primarily benchmarking xbmc and I think the blob is about 3 times as fast as the OSS driver there.
    I would take benchmarks with a grain of salt... After all, oftentimes the binary driver could easily outright lie, crash, produce incorrect results (mainly bugs), and various other things.

    Comment


    • Originally posted by barkas View Post
      That leads to the question if the present low manpower way in which the OSS driver is built is sustainable or if it will always be more stable but much slower than the blob. I'm primarily benchmarking xbmc and I think the blob is about 3 times as fast as the OSS driver there.
      Tough question. The open source driver is maybe 1/100th the size and complexity of the proprietary driver (closer to 1/30th if you include all the Mesa common code) and in the areas where performance depends on cubic developer-years of optimization the open source driver is likely to always be slower simply because the proprietary drivers share code across all OSes and development costs can be shared across almost 100% of the PC market.

      That said, I don't expect the difference to be that big, and I also expect there will be a number of workloads where you do get performance parity quickly. The initial performance estimates we made were based on having a couple of AMD developers and maybe 6-8 full-time-equivalent community developers (not the thousands of developers that were being talked about ). Right now the number of AMD developers working on 2D/3D performance is pretty much what we planned (we hired more devs than originally planned but they aren't all working on 3D graphics) and the community developer pool is a bit smaller than we had expected.

      Performance gains are running maybe 12-18 months behind what I expected (which pretty much fits the difference in #developers), but all indications are still that going from "blob is 3x as fast" to "blob is 1.5x as fast" (roughly where r300g seems to be today on 5xx hardware) should happen fairly quickly, say within a year. What I don't know is whether the r600g driver is going to need a fancier shader compiler to get there.

      So... yes, I think the current model is sustainable. It's easy to forget that the devs have implemented support for ~10 years of hardware (2002-2012) in less than 5 years of development (2007-2012), and that now that new hardware support is close to being "caught up", relatively more of that effort can go into features, performance, etc...
      Last edited by bridgman; 12 July 2012, 06:32 PM.

      Comment


      • Originally posted by bridgman View Post
        Tough question. The open source driver is maybe 1/100th the size and complexity of the proprietary driver (closer to 1/30th if you include all the Mesa common code) and in the areas where performance depends on cubic developer-years of optimization the open source driver is likely to always be slower simply because the proprietary drivers share code across all OSes and development costs can be shared across almost 100% of the PC market.

        That said, I don't expect the difference to be that big, and I also expect there will be a number of workloads where you do get performance parity quickly. The initial performance estimates we made were based on having a couple of AMD developers and maybe 6-8 full-time-equivalent community developers (not the thousands of developers that were being talked about ). Right now the number of AMD developers working on 2D/3D performance is pretty much what we planned (we hired more devs than originally planned but they aren't all working on 3D graphics) and the community developer pool is a bit smaller than we had expected.

        Performance gains are running maybe 12-18 months behind what I expected (which pretty much fits the difference in #developers), but all indications are still that going from "blob is 3x as fast" to "blob is 1.5x as fast" (roughly where r300g seems to be today on 5xx hardware) should happen fairly quickly, say within a year. What I don't know is whether the r600g driver is going to need a fancier shader compiler to get there.

        So... yes, I think the current model is sustainable. It's easy to forget that the devs have implemented support for ~10 years of hardware (2002-2012) in less than 5 years of development (2007-2012), and that now that new hardware support is close to being "caught up", relatively more of that effort can go into features, performance, etc...
        Calculated with your numbers, the logical conclusion is:
        The open-source driver costs (1/30) as much and delivers 66% of the speed.
        This means you only need 33% faster hardware to save (29/30) of the development costs.
        That is the difference between a standard HD 7950 (3 GB VRAM, 900 MHz core) and an HD 7970 (1000 MHz core clock, 3 GB VRAM, 1400 MHz VRAM clock).
        That is the difference between 306,42€ and roughly 406€.
        I don't know how much the development costs of both drivers are, but for consumers it's only a ~100€ difference per "high-end card".
        From my point of view AMD should drop the closed-source driver, save the development money, and go with the much cheaper open-source driver model.
        But maybe I'm wrong, because maybe developing a driver is so cheap that they earn a lot of profit on this 100€ difference per card.
        But if driver development is that cheap, then the closed-source driver is even more of a fake.
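        To make that trade-off easy to check, here is a tiny back-of-the-envelope sketch of the arithmetic in Python. The speed ratio is bridgman's "blob is 1.5x as fast" estimate quoted earlier in the thread, and the prices are the poster's own examples; none of this is official AMD data.

        # Rough check of the "buy faster hardware instead" argument.
        # Figures come from this thread (bridgman's ~1.5x estimate and the
        # poster's example card prices), not from any official source.
        blob_speed_ratio = 1.5                        # blob ~1.5x the open driver
        oss_relative_speed = 1.0 / blob_speed_ratio   # open driver ~0.67 of the blob

        # Hardware speed-up needed for the open driver to match the blob:
        required_speedup = blob_speed_ratio
        print("open driver delivers ~%.0f%% of blob speed" % (100 * oss_relative_speed))
        print("hardware would need to be ~%.0f%% faster" % (100 * (required_speedup - 1)))

        # Poster's example prices: HD 7950 at ~306 EUR, ~100 EUR step to the HD 7970.
        hd7950_price = 306.42
        price_step = 100.0
        print("extra hardware cost in the example: ~%.0f%% more" % (100 * price_step / hd7950_price))

        Note that with a 1.5x gap the required speed-up actually works out closer to 50% than 33%, but the general shape of the argument (a ~100€ hardware step versus the full cost of a second driver stack) is unchanged.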

        Comment


        • Driver optimization is definitely a case of *seriously* diminishing returns as you go further up the curve. I haven't done any curve-fitting recently but there's no question that the first 5-10% of the work can give you maybe 60-70% of the satisfaction *if* you choose the right 10% to work on.

          I've been pretty happy running low-midrange cards with the open source graphics driver (HD 5670 was the last card I bought) and getting decent performance, with a few caveats:

          1. I don't have enough free time to do much gaming, so my workloads are biased toward the less-performance-critical

          2. One of the important optimization tasks is either reducing CPU overhead or spreading the overhead across multiple threads (which has pretty much the same effect if you have multiple cores), and a faster GPU doesn't help in cases where you are CPU limited.

          3. There is some fairly low-hanging fruit that comes from identifying "really slow" cases where either the driver doesn't accelerate a certain function as much as it could or there are side effects from the current acceleration (lots of memory copies etc...)... again, not all of these are helped by faster hardware although many of them are.

          Some of (2) has already been done (eg Marek added multithreading last summer) and it's probably fair to say that a small amount of (3) happened last year but it's just started to really ramp up recently.

          Comment


          • Originally posted by bridgman View Post
            Driver optimization is definitely a case of *seriously* diminishing returns as you go further up the curve. I haven't done any curve-fitting recently but there's no question that the first 5-10% of the work can give you maybe 60-70% of the satisfaction *if* you choose the right 10% to work on.

            I've been pretty happy running low-midrange cards with the open source graphics driver (HD 5670 was the last card I bought) and getting decent performance, with a few caveats:

            1. I don't have enough free time to do much gaming, so my workloads are biased toward the less-performance-critical

            2. One of the important optimization tasks is either reducing CPU overhead or spreading the overhead across multiple threads (which has pretty much the same effect if you have multiple cores), and a faster GPU doesn't help in cases where you are CPU limited.

            3. There is some fairly low-hanging fruit that comes from identifying "really slow" cases where either the driver doesn't accelerate a certain function as much as it could or there are side effects from the current acceleration (lots of memory copies etc...)... again, not all of these are helped by faster hardware although many of them are.

            Some of (2) has already been done (eg Marek added multithreading last summer) and it's probably fair to say that a small amount of (3) happened last year but it's just started to really ramp up recently.
            From my point of view, business is all about economics, and if optimizing the last bits burns all the money ("Driver optimization is definitely a case of *seriously* diminishing returns as you go further up the curve."), then it's not the smart way to do business.
            If it were my company, I would commission a study on whether it's worth it or not.
            I'm sure AMD doesn't even think in this rational, logical way.
            My advice to AMD is: cut costs to increase profit by acting rationally/logically and dropping the closed-source drivers.

            Comment


            • Originally posted by Dandel View Post
              I would take benchmarks with a grain of salt... After all, oftentimes the binary driver could easily outright lie, crash, produce incorrect results (mainly bugs), and various other things.
              By benchmark I mean I watched an HD movie with xbmc and monitored the CPU load, which is about 3 times that of the same setup using Windows.

              @bridgman: sounds reasonable. I only wish it would go faster, and some things really irritated me, like the HDMI audio thing. That was far too slow. Also, GCN support is taking a long time; I had hoped for more there.
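              In case anyone wants to reproduce that kind of comparison, here is a minimal sketch of sampling overall CPU load while playback is running. It assumes the psutil Python package is installed and that xbmc is already playing the file; the 60-second sampling window is arbitrary.

              # Sample system-wide CPU load once per second for a minute while
              # the player is running, then report the average and the peak.
              import psutil

              samples = [psutil.cpu_percent(interval=1.0) for _ in range(60)]
              print("average CPU load: %.1f%%" % (sum(samples) / len(samples)))
              print("peak CPU load:    %.1f%%" % max(samples))

              Running the same script against the same clip under each driver (or an equivalent monitor under Windows) gives a rough apples-to-apples number for playback overhead.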

              Comment


              • @bridgman

                Do you really need to buy gfx cards? What does AMD do with all the test samples?

                Comment


                • @Bridgman

                  In the last optimized test a month ago (http://www.phoronix.com/scan.php?pag...ompete12&num=1), the low-end card (6450) was at 1/2 to 1/6 of the blob's performance.

                  That is a bigger gap than with more powerful cards, which goes against what you've said earlier - that the high-end cards would show the larger gap versus the blob.

                  What's your opinion on this?

                  Comment


                  • Originally posted by curaga View Post
                    In the last optimized test a month ago (http://www.phoronix.com/scan.php?pag...ompete12&num=1), the low-end card (6450) was at 1/2 to 1/6 of the blob's performance. That is a bigger gap than with more powerful cards, which goes against what you've said earlier - that the high-end cards would show the larger gap versus the blob. What's your opinion on this?
                    I have normally recommended mid-range cards rather than low end. The simple explanation for that is "open source drivers are currently slower so get a faster card", but there's actually more to it (as you noticed). Here we go...

                    On the high end it's hard to make full use of the hardware with open source drivers with the current level of driver optimization for *CPU* usage, ie there's a good chance the drivers will bottleneck on CPU before they fully utilize the GPU on most workloads. Note that as shader workloads become more complex high end cards become more attractive even without further driver optimizations.

                    On the low end the issue is that the die area allocation on low end parts is usually optimized for different workloads than on the midrange and high end parts (all vendors, not just us). Specifically, the ratio between shader throughput (ALUs) and pixel/texel throughput (texture units / ROPs) changes, with low end parts having relatively less shader power compared to pixel-pushing ability. This makes sense if you think about it -- you don't expect low end parts to deliver the same gaming experience but you still expect basic 2D operations to happen snappity-quick. The differences are less drastic on newer discrete GPU families because integrated graphics have displaced the very low end discrete GPU market, but even on the HD 6xxx family there's maybe a 10:1 difference in shader throughput vs only 4:1 difference in pixel throughput.

                    This isn't particularly meaningful on its own, but if you combine the above point with the fact that shader compiler optimization is usually one of the last things to happen in an open source driver, you get the result that shader-intensive workloads on low end hardware with open source drivers are likely to bottleneck on the shader core first, while running the same workload on midrange hardware would be more likely to "bottleneck on everything at the same time" because the balance between hardware resources is different.

                    If you look at results on specific benchmarks, you'll see the relative performance on low-end hardware is higher on programs with simpler shaders. It's all about the *first* bottleneck you hit - low end cards tend to hit shader limits first, high end cards tend to hit CPU limits first. Make sense?
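                    A toy model may help make the "first bottleneck wins" point concrete. This is only an illustrative sketch (the throughput and overhead figures below are invented for the example, not real hardware or driver numbers): frame time is taken as the slowest of the shader stage, the pixel stage, and the CPU-side driver overhead.

                    # Toy "first bottleneck wins" model of a frame. All figures are
                    # invented for illustration; none are real hardware numbers.
                    def frame_time(shader_work, pixel_work, shader_rate, pixel_rate, cpu_overhead):
                        return max(shader_work / shader_rate,   # shader-bound time
                                   pixel_work / pixel_rate,     # pixel/texel-bound time
                                   cpu_overhead)                # CPU/driver-bound time

                    # ~10:1 shader gap and ~4:1 pixel gap between the hypothetical parts,
                    # with the same CPU-side driver overhead on both.
                    low_end  = dict(shader_rate=1.0,  pixel_rate=1.0, cpu_overhead=0.006)
                    high_end = dict(shader_rate=10.0, pixel_rate=4.0, cpu_overhead=0.006)

                    simple_w  = dict(shader_work=0.002, pixel_work=0.02)   # simple shaders
                    complex_w = dict(shader_work=0.05,  pixel_work=0.02)   # shader-heavy workload

                    for cname, card in (("low end", low_end), ("high end", high_end)):
                        for wname, work in (("simple", simple_w), ("complex", complex_w)):
                            t = frame_time(**work, **card)
                            print("%-8s card, %-7s shaders: %5.1f fps" % (cname, wname, 1.0 / t))

                    With these made-up numbers the low-end part is the first to hit its shader limit once shaders get heavy, while the high-end part ends up waiting on the CPU either way - the same pattern described above.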

                    Originally posted by Kano View Post
                    @bridgman

                    Do you really need to buy gfx cards? What does amd do with all the test samples?
                    The early engineering samples can be flakey and not something I would want in my home system. Since we have a unified driver, the later samples get kept around for testing new driver changes on older hardware. If I just wanted free hardware I could probably wait around until we EOL'ed HW generations and see what I could scrounge, but I didn't want to wait that long particularly since the open source drivers have caught up with newer hardware.

                    Originally posted by maldorordiscord View Post
                    From my point of view, business is all about economics, and if optimizing the last bits burns all the money ("Driver optimization is definitely a case of *seriously* diminishing returns as you go further up the curve."), then it's not the smart way to do business. If it were my company, I would commission a study on whether it's worth it or not. I'm sure AMD doesn't even think in this rational, logical way. My advice to AMD is: cut costs to increase profit by acting rationally/logically and dropping the closed-source drivers.
                    We have this discussion every year or so. Problem is that a lot of PC industry buying decisions are still reliant on feature checklists and benchmark numbers. Having all the right answers to the questions isn't enough to get you the order, but missing a feature or being a few percent down on performance is enough to disqualify you. There's a theorem somewhere that says it's dangerous to be more rational and logical than your market.

                    This is part of the bigger challenge we all face, specifically that supply chains win by simplifying and reducing choices (eg "what the heck, why bother with Linux?") even though the people who actually buy and use the hardware want more choice and more support for their specific needs. The result is arguably a "race to the bottom", eg where every bookstore sells the same top-30 best sellers and nothing else because that works best for *them* even if it sucks for the customer.

                    If you want to avoid that race to the bottom you need to play both short term and long term games at the same time, which may seem wasteful but may also be unavoidable for a while.
                    Last edited by bridgman; 13 July 2012, 07:42 AM.

                    Comment


                    • Originally posted by bridgman View Post
                      We have this discussion every year or so. Problem is that a lot of PC industry buying decisions are still reliant on feature checklists and benchmark numbers. Having all the right answers to the questions isn't enough to get you the order, but missing a feature or being a few percent down on performance is enough to disqualify you. There's a theorem somewhere that says it's dangerous to be more rational and logical than your market.
                      LOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOL
                      But that's the best explanation you have ever given me!
                      Damn these stupid people, damn, I'm too rational and too logical for this kind of world, damn!
                      Really just: damn...

                      Mankind just lost all hope.

                      Anyway, if I buy an HD 7970, can I get the 100€ "Catalyst" closed-source tax back if I promise I will use the open-source driver?
                      Can we please get Catalyst-incompatible firmware to make sure we don't need to pay the Catalyst closed-source tax?
                      AMD really should pay 100€ (for an HD 7970) in compensation to open-source driver users!
                      In fact, if I buy an AMD card I pay the (29/30) Catalyst closed-source tax and I only get (1/30) open-source improvement for my money :-( this is so sad.

                      This Catalyst closed-source tax is just unfair!
                      Last edited by maldorordiscord; 13 July 2012, 09:22 AM.

                      Comment
