Considering a new GPU soon. How's the 7700 series on Linux?


  • #91
    Hmm... there seems to be a bit of a religious war going on.

    Not wanting to step into any doo-doo: I completely f'd up and need to purchase a video card for my new rig today. Some specs: Phenom II X4 960 with an Asus M5A97 mobo, 16GB of DDR3-1600 RAM, a 120GB SSD, and a Corsair HX550W PSU. And, of course, I had an old Nvidia 7300GS video card kicking around... but I didn't realize that it has a short-height bracket, and I lost the full-height bracket that's needed for my case.

    The aim is to run ESXi, and I'm planning on using CentOS for one or more VMs. I don't intend to play any games on it, but will probably do the odd bit of video playback. I was thinking of getting a 7750-based card due to the attractive TDP and performance per watt.

    Windows will also be installed on here at some point, but that's secondary.

    Is a 7750/7770-based card a good idea? I want something with a relatively new architecture because it'll be a while before this is upgraded, and I'm interested in messing around with OpenCL. So, in case it's not obvious, this is mostly a development rig, so pure performance isn't critical, but I wouldn't want it to chug along too slowly.

    Thoughts? I pretty much want to purchase the card today or tomorrow so I can spend some quality time this weekend using it.



    • #92
      Originally posted by PhuFighter View Post
      Hmm... there seems to be a bit of a religious war going on.

      Not wanting to step into any doo-doo: I completely f'd up and need to purchase a video card for my new rig today. Some specs: Phenom II X4 960 with an Asus M5A97 mobo, 16GB of DDR3-1600 RAM, a 120GB SSD, and a Corsair HX550W PSU. And, of course, I had an old Nvidia 7300GS video card kicking around... but I didn't realize that it has a short-height bracket, and I lost the full-height bracket that's needed for my case.

      The aim is to run ESXi, and I'm planning on using CentOS for one or more VMs. I don't intend to play any games on it, but will probably do the odd bit of video playback. I was thinking of getting a 7750-based card due to the attractive TDP and performance per watt.

      Windows will also be installed on here at some point, but that's secondary.

      Is a 7750/7770-based card a good idea? I want something with a relatively new architecture because it'll be a while before this is upgraded, and I'm interested in messing around with OpenCL. So, in case it's not obvious, this is mostly a development rig, so pure performance isn't critical, but I wouldn't want it to chug along too slowly.

      Thoughts? I pretty much want to purchase the card today or tomorrow so I can spend some quality time this weekend using it.
      See my previous post... That said, the 7750/7770 is a very good idea, because both include the option to perform double-precision floating-point calculations on the graphics card. That is extremely useful if you want to do math that requires precision.
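      Since double precision is an optional OpenCL feature, it's worth verifying at runtime that the driver actually exposes the cl_khr_fp64 extension before depending on it. Here's a minimal sketch in C; it simply assumes the first platform and first GPU device, and omits error handling:

      Code:
      /* Query the first GPU device's extension string and look for cl_khr_fp64. */
      #include <stdio.h>
      #include <string.h>
      #include <CL/cl.h>

      int main(void)
      {
          cl_platform_id platform;
          cl_device_id device;
          char extensions[4096];

          clGetPlatformIDs(1, &platform, NULL);
          clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
          clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS,
                          sizeof(extensions), extensions, NULL);

          printf("%s\n", strstr(extensions, "cl_khr_fp64")
                             ? "double precision supported"
                             : "no double precision on this device");
          return 0;
      }

      Kernels that use doubles then also need "#pragma OPENCL EXTENSION cl_khr_fp64 : enable" at the top of the kernel source.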



      • #93
        Originally posted by PhuFighter View Post
        Hmm... there seems to be a bit of a religious war going on.

        Not wanting to step into any doo-doo: I completely f'd up and need to purchase a video card for my new rig today. Some specs: Phenom II X4 960 with an Asus M5A97 mobo, 16GB of DDR3-1600 RAM, a 120GB SSD, and a Corsair HX550W PSU. And, of course, I had an old Nvidia 7300GS video card kicking around... but I didn't realize that it has a short-height bracket, and I lost the full-height bracket that's needed for my case.

        The aim is to run ESXi, and I'm planning on using CentOS for one or more VMs. I don't intend to play any games on it, but will probably do the odd bit of video playback. I was thinking of getting a 7750-based card due to the attractive TDP and performance per watt.

        Windows will also be installed on here at some point, but that's secondary.

        Is a 7750/7770-based card a good idea? I want something with a relatively new architecture because it'll be a while before this is upgraded, and I'm interested in messing around with OpenCL. So, in case it's not obvious, this is mostly a development rig, so pure performance isn't critical, but I wouldn't want it to chug along too slowly.

        Thoughts? I pretty much want to purchase the card today or tomorrow so I can spend some quality time this weekend using it.
        I do hope you know that ESXi is headless, save for some basic text-mode configuration stuff.



        • #94
          Originally posted by barkas View Post
          I do hope you know that ESXi is headless, save for some basic text-mode configuration stuff.
          Yes. I have it installed on my servers at work now. I am just putting together a system at home so that I don't have to comply with all of the security policies from the office. The idea is to have all of the ESXi VMs (for dev purposes) boot off the SSD, while Windows, etc. boots off an HDD.



          • #95
            Originally posted by bridgman View Post
            Thanks Paul. Normally I focus entirely on tomorrow, but when you said that we did something so terrible that you (and others) would never use AMD products again, and I had no idea what you were talking about, it seemed worth looking into.

            IIRC 2004 was when we were just starting to move the Linux drivers from a completely separate code base to a shared-code model so we could bring hardware support and features/performance to Linux users more quickly. That work started in 2004 (to get ready for r5xx in 2005) and ran through 2007, with the last big change (moving to a new OpenGL driver stack) coming in Sep 2007.
            As I remember it, the really old driver was fine up until 2004, when ATI stopped giving information to the DRI developers; as far as I know that was Tungsten Graphics back then.
            The stretch you describe, from when that work started in 2004 up until 2007, was the very, very bad time: no real open source anymore, just the crappy blob instead.
            Since the documentation has been opened up, things have slowly gotten better, but it's still bad.
            I admit that fglrx has gotten a little better over time, but I still consider it almost unusable.
            Last edited by barkas; 07-12-2012, 11:34 AM.



            • #96
              My understanding was that we stopped providing support for open drivers around 2002, when the fglrx driver was first introduced with a Linux-specific code base. I was told the information flow basically stopped after r300 2D and before r300 3D.

              There are three driver architectures under discussion here, not two:

              - open source driver, supported with info from ATI until ~2002, support restarted in 2007
              - proprietary Linux-only driver, starting with r200 and the primary option for r300, ~2002 through 2004
              - proprietary Linux driver code sharing with other OSes, incremental transition between 2004 and 2007 then stable-ish architecture from 2007 on



              • #97
                It makes even more sense when you know that at the end of 2006 ATI was integrated into AMD.



                • #98
                  Originally posted by bridgman View Post
                  My understanding was that we stopped providing support for open drivers around 2002, when the fglrx driver was first introduced with a Linux-specific code base. I was told the information flow basically stopped after r300 2D and before r300 3D.

                  There are three driver architectures under discussion here, not two:

                  - open source driver, supported with info from ATI until ~2002, support restarted in 2007
                  - proprietary Linux-only driver, starting with r200 and the primary option for r300, ~2002 through 2004
                  - proprietary Linux driver code sharing with other OSes, incremental transition between 2004 and 2007 then stable-ish architecture from 2007 on
                  Your dates are probably more accurate than my memory. Anyway, the first OSS driver was the best, followed by the present OSS driver, which is good, if sometimes very slow. The first proprietary driver was certainly the worst; the present one isn't great, but it's better.

                  @kano: When AMD took over, it got better in my opinion.



                  • #99
                    I suspect the present OSS driver is actually faster than the older versions on today's workloads, and that the driver is being asked to do more work than before in order to provide a slicker-looking UI. Even adding a compositor makes a big change in the driver workload.

                    A much bigger issue is that older GPUs dedicated a big chunk of die area to optimized 2D acceleration hardware, while most modern GPUs use the 3D engine for pretty much everything and don't even *have* 2D hardware. In our case, the R5xx and RS6xx generations were the last ones with 2D acceleration hardware.

                    Performance on the kind of benchmarks you ran in 2002 is probably lower on modern hardware, but that's a hardware change, not a driver change.
                    Last edited by bridgman; 07-12-2012, 05:53 PM.



                    • Originally posted by bridgman View Post
                      I suspect the present OSS driver is actually faster than the older versions on today's workloads, and that the driver is being asked to do more work than before in order to provide a slicker-looking UI. Even adding a compositor makes a big change in the driver workload.

                      A much bigger issue is that older GPUs dedicated a big chunk of die area to optimized 2D acceleration hardware, while most modern GPUs use the 3D engine for pretty much everything and don't even *have* 2D hardware. In our case, the R5xx and RS6xx generations were the last ones with 2D acceleration hardware.

                      Performance on the kind of benchmarks you ran in 2002 is probably lower on modern hardware, but that's a hardware change, not a driver change.
                      Those things have certainly gotten much faster, and very much harder to write a driver for.
                      That leads to the question of whether the present low-manpower way in which the OSS driver is built is sustainable, or whether it will always be more stable but much slower than the blob. I'm primarily benchmarking XBMC, and I think the blob is about 3 times as fast as the OSS driver there.



                      • Originally posted by barkas View Post
                        Those things have certainly gotten much faster, and very much harder to write a driver for.
                        That leads to the question of whether the present low-manpower way in which the OSS driver is built is sustainable, or whether it will always be more stable but much slower than the blob. I'm primarily benchmarking XBMC, and I think the blob is about 3 times as fast as the OSS driver there.
                        I would take benchmarks with a grain of salt... After all, oftentimes the binary driver can outright lie, crash, produce incorrect results (bugs, mainly), and do various other things.



                        • Originally posted by barkas View Post
                          That leads to the question of whether the present low-manpower way in which the OSS driver is built is sustainable, or whether it will always be more stable but much slower than the blob. I'm primarily benchmarking XBMC, and I think the blob is about 3 times as fast as the OSS driver there.
                          Tough question. The open source driver is maybe 1/100th the size and complexity of the proprietary driver (closer to 1/30th if you include all the Mesa common code) and in the areas where performance depends on cubic developer-years of optimization the open source driver is likely to always be slower simply because the proprietary drivers share code across all OSes and development costs can be shared across almost 100% of the PC market.

                          That said, I don't expect the difference to be that big, and I also expect there will be a number of workloads where you do get performance parity quickly. The initial performance estimates we made were based on having a couple of AMD developers and maybe 6-8 full-time-equivalent community developers (not the thousands of developers that were being talked about). Right now the number of AMD developers working on 2D/3D performance is pretty much what we planned (we hired more devs than originally planned but they aren't all working on 3D graphics) and the community developer pool is a bit smaller than we had expected.

                          Performance gains are running maybe 12-18 months behind what I expected (which pretty much fits the difference in #developers), but all indications are still that going from "blob is 3x as fast" to "blob is 1.5x as fast" (roughly where r300g seems to be today on 5xx hardware) should happen fairly quickly, say within a year. What I don't know is whether the r600g driver is going to need a fancier shader compiler to get there.

                          So... yes, I think the current model is sustainable. It's easy to forget that the devs have implemented support for ~10 years of hardware (2002-2012) in less than 5 years of development (2007-2012), and that now that new hardware support is close to being "caught up", relatively more of that effort can go into features, performance, etc...
                          Last edited by bridgman; 07-12-2012, 06:32 PM.



                          • Originally posted by bridgman View Post
                            Tough question. The open source driver is maybe 1/100th the size and complexity of the proprietary driver (closer to 1/30th if you include all the Mesa common code) and in the areas where performance depends on cubic developer-years of optimization the open source driver is likely to always be slower simply because the proprietary drivers share code across all OSes and development costs can be shared across almost 100% of the PC market.

                            That said, I don't expect the difference to be that big, and I also expect there will be a number of workloads where you do get performance parity quickly. The initial performance estimates we made were based on having a couple of AMD developers and maybe 6-8 full-time-equivalent community developers (not the thousands of developers that were being talked about). Right now the number of AMD developers working on 2D/3D performance is pretty much what we planned (we hired more devs than originally planned but they aren't all working on 3D graphics) and the community developer pool is a bit smaller than we had expected.

                            Performance gains are running maybe 12-18 months behind what I expected (which pretty much fits the difference in #developers), but all indications are still that going from "blob is 3x as fast" to "blob is 1.5x as fast" (roughly where r300g seems to be today on 5xx hardware) should happen fairly quickly, say within a year. What I don't know is whether the r600g driver is going to need a fancier shader compiler to get there.

                            So... yes, I think the current model is sustainable. It's easy to forget that the devs have implemented support for ~10 years of hardware (2002-2012) in less than 5 years of development (2007-2012), and that now that new hardware support is close to being "caught up", relatively more of that effort can go into features, performance, etc...
                            Calculated with your numbers, the logical conclusion is:
                            the open-source driver costs (1/30) of what the closed driver costs, for 66% of the speed.
                            This means you only need 33% faster hardware to compensate, while saving (29/30) of the development costs.
                            That's the difference between a standard HD 7950 (3GB VRAM, 900MHz) and an HD 7970 (1000MHz clock speed, 3GB VRAM at a 1400MHz VRAM clock).
                            That's the difference between 306,42 and 406,
                            I don't know how much the development of both drivers costs, but for consumers it's only a ~100 difference per "high-end card".
                            From my point of view, AMD should drop the closed-source driver, save the development money, and go with the much cheaper open-source driver model.
                            But maybe I'm wrong, because maybe driver development is so cheap that they earn a lot of profit on this 100 difference per card.
                            But if driver development is that cheap, then the closed-source driver is even more of a fake.
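                            A quick back-of-envelope check on the compensation factor above, using the post's own 66% figure: if the open driver delivers 66% of the blob's speed, matching the blob needs hardware faster by

                                1 / 0.66 ≈ 1.52

                            i.e. roughly 50% faster rather than 33%; 33% is what you get if you read the 66% gap as percentage points. The shape of the price argument is unchanged either way.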



                            • Driver optimization is definitely a case of *seriously* diminishing returns as you go further up the curve. I haven't done any curve-fitting recently but there's no question that the first 5-10% of the work can give you maybe 60-70% of the satisfaction *if* you choose the right 10% to work on.

                              I've been pretty happy running low-midrange cards with the open source graphics driver (HD 5670 was the last card I bought) and getting decent performance, with a few caveats:

                              1. I don't have enough free time to do much gaming, so my workloads are biased toward the less-performance-critical

                              2. One of the important optimization tasks is either reducing CPU overhead or spreading the overhead across multiple threads (which has pretty much the same effect if you have multiple cores), and a faster GPU doesn't help in cases where you are CPU-limited.

                              3. There is some fairly low-hanging fruit that comes from identifying "really slow" cases where either the driver doesn't accelerate a certain function as much as it could or there are side effects from the current acceleration (lots of memory copies etc...)... again, not all of these are helped by faster hardware although many of them are.

                              Some of (2) has already been done (e.g. Marek added multithreading last summer; the sketch below illustrates the general idea) and it's probably fair to say that a small amount of (3) happened last year but it's just started to really ramp up recently.
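                              To make the threading idea in (2) concrete, here is a rough, hypothetical sketch (not Mesa's actual code): the application thread records commands into a queue while a worker thread does the expensive submission, moving driver CPU overhead off the application's critical path. Bounded-queue handling is omitted for brevity; compile with -pthread:

                              Code:
                              #include <pthread.h>
                              #include <stdio.h>

                              #define QUEUE_SIZE 64

                              typedef struct {
                                  int cmd[QUEUE_SIZE];
                                  int head, tail, done;
                                  pthread_mutex_t lock;
                                  pthread_cond_t ready;
                              } queue_t;

                              /* Application thread: recording a command is cheap. */
                              static void queue_push(queue_t *q, int cmd)
                              {
                                  pthread_mutex_lock(&q->lock);
                                  q->cmd[q->tail++ % QUEUE_SIZE] = cmd;
                                  pthread_cond_signal(&q->ready);
                                  pthread_mutex_unlock(&q->lock);
                              }

                              /* Worker thread: the expensive "submit to GPU" step
                                 (a printf stands in for it here) runs off the
                                 application thread, in parallel on another core. */
                              static void *flush_thread(void *arg)
                              {
                                  queue_t *q = arg;
                                  pthread_mutex_lock(&q->lock);
                                  for (;;) {
                                      while (q->head == q->tail && !q->done)
                                          pthread_cond_wait(&q->ready, &q->lock);
                                      if (q->head == q->tail && q->done)
                                          break;
                                      int cmd = q->cmd[q->head++ % QUEUE_SIZE];
                                      pthread_mutex_unlock(&q->lock);
                                      printf("submitting command %d\n", cmd);
                                      pthread_mutex_lock(&q->lock);
                                  }
                                  pthread_mutex_unlock(&q->lock);
                                  return NULL;
                              }

                              int main(void)
                              {
                                  queue_t q = { .head = 0, .tail = 0, .done = 0,
                                                .lock = PTHREAD_MUTEX_INITIALIZER,
                                                .ready = PTHREAD_COND_INITIALIZER };
                                  pthread_t t;
                                  pthread_create(&t, NULL, flush_thread, &q);
                                  for (int i = 0; i < 8; i++)
                                      queue_push(&q, i);   /* app thread keeps issuing work */
                                  pthread_mutex_lock(&q.lock);
                                  q.done = 1;
                                  pthread_cond_signal(&q.ready);
                                  pthread_mutex_unlock(&q.lock);
                                  pthread_join(t, NULL);
                                  return 0;
                              }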



                              • Originally posted by bridgman View Post
                                Driver optimization is definitely a case of *seriously* diminishing returns as you go further up the curve. I haven't done any curve-fitting recently but there's no question that the first 5-10% of the work can give you maybe 60-70% of the satisfaction *if* you choose the right 10% to work on.

                                I've been pretty happy running low-midrange cards with the open source graphics driver (HD 5670 was the last card I bought) and getting decent performance, with a few caveats:

                                1. I don't have enough free time to do much gaming, so my workloads are biased toward the less-performance-critical

                                2. One of the important optimization tasks is either reducing CPU overhead or spreading the overhead across multiple threads (which has pretty much the same effect if you have multiple cores), and a faster GPU doesn't help in cases where you are CPU-limited.

                                3. There is some fairly low-hanging fruit that comes from identifying "really slow" cases where either the driver doesn't accelerate a certain function as much as it could or there are side effects from the current acceleration (lots of memory copies etc...)... again, not all of these are helped by faster hardware although many of them are.

                                Some of (2) has already been done (e.g. Marek added multithreading last summer) and it's probably fair to say that a small amount of (3) happened last year but it's just started to really ramp up recently.
                                From my point of view, business is all about economics, and if optimizing the last bits burns all the money ("Driver optimization is definitely a case of *seriously* diminishing returns as you go further up the curve."), then it's not the smart way to do business.
                                If it were my company, I would commission a study on whether it's worth it or not.
                                I'm sure AMD doesn't even think in this rational, logical way.
                                My advice to AMD is: cut costs and increase profit by acting rationally/logically and dropping the closed-source drivers.

