AMD Radeon HD 5750/5770


  • @bridgman
    As I said, I'm not an expert.
    My statements and questions come from information I gathered here and there on the Internet, so I am not able to give you a link. However, I must say that I may have misunderstood: ATI declared its intention to release the specifications of its hardware in order to support the development of open software, which has nothing to do with opening the code of their proprietary drivers... or it is possible that I read what people even more confused than me wrote.
    Anyway, now everything seems clarified.

    -------------------------------

    That said, I would like to continue with my questions (given your willingness to respond and to clarify).

    I read somewhere, maybe even on this same forum (and this time I think I can say so with certainty), that the open driver is pretty stable but its support for 3D acceleration is still limited, and its performance is far below that of the proprietary driver (although the latter is less stable).

    As we have said, the open driver is growing day by day, so the questions are:
    - Will 3D in the open driver ever match the level of the proprietary driver?
    - Will all the features be implemented sooner or later, or not?

    Because, for example, if today I buy an HD 5700 in CrossFire with an HD 5870 and use them with Cinelerra or Blender at peak performance (making full use of OpenGL) for, hypothetically, 3 or 4 years, I'd be very disappointed if, once deprived of Catalyst support and forced to move to the open drivers, I ended up with drastically reduced performance!



    • Originally posted by bingel
      - Will 3D in the open driver ever match the level of the proprietary driver?
      We have released enough programming information to let the open drivers match or exceed the performance of the proprietary drivers, but the proprietary drivers have much larger development teams and that will have an impact on how much of that potential performance actually gets realized.

      The downside of a large development team, of course, is that you have to aggressively move GPUs onto a legacy or reduced support model, and that the support reduction is usually driven not by Linux needs but by the needs of other OSes.

      I expect you will see open source driver performance get quite close to proprietary driver performance for most workloads, but for the most shader-intensive or video-memory-intensive applications the proprietary drivers will probably always be faster because they can share development costs across multiple OSes.

      Originally posted by bingel
      - Will all the features be implemented sooner or later, or not?
      I expect that generic features like the level of OpenGL support will probably all get implemented over time, while proprietary features (particularly those unique to a single vendor) may not. There is not a lot of interest in features like Crossfire / SLI in the open source community today, but other features like PowerXpress / (whatever NVidia calls theirs) are already being worked on (see airlied's work on switchable graphics).
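
      To give a concrete idea of what that switchable-graphics work looks like from userspace, here is a minimal Python sketch that talks to the vga_switcheroo debugfs interface. The path and the command strings ("IGD", "DIS", and the deferred "DIGD"/"DDIS" variants) assume a kernel built with vga_switcheroo support and a mounted debugfs; treat it as an illustration, not a supported tool.

      # Sketch only: lists the GPUs vga_switcheroo knows about and asks it to
      # switch the active one. Assumes debugfs is mounted at /sys/kernel/debug,
      # the kernel includes vga_switcheroo, and the script is run as root.
      SWITCH = "/sys/kernel/debug/vgaswitcheroo/switch"

      def show_gpus():
          # Each line looks roughly like "0:IGD:+:Pwr:0000:00:02.0" --
          # index, type (IGD = integrated, DIS = discrete), '+' marking the
          # active GPU, power state, then the PCI address.
          with open(SWITCH) as f:
              print(f.read())

      def switch_to(target):
          # target: "IGD" or "DIS" for an immediate switch, "DIGD"/"DDIS" to
          # defer the switch until the next X server restart, "ON"/"OFF" to
          # power the inactive GPU up or down.
          with open(SWITCH, "w") as f:
              f.write(target + "\n")

      if __name__ == "__main__":
          show_gpus()

      An immediate switch only succeeds if nothing is still using the outgoing GPU, so in practice the deferred commands followed by an X restart are usually the safer choice.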

      Originally posted by bingel
      Because, for example, if today I buy an HD 5700 in CrossFire with an HD 5870 and use them with Cinelerra or Blender at peak performance (making full use of OpenGL) for, hypothetically, 3 or 4 years, I'd be very disappointed if, once deprived of Catalyst support and forced to move to the open drivers, I ended up with drastically reduced performance!
      If you want to use any kind of multi-GPU rendering (us or the competition) you would not want to mix two cards that were so different in performance. There has been some progress in making better use of mismatched cards but you're always going to get the best results with two of the same card.

      Also, if you want to use a multi-GPU system professionally for more than 4 years under Linux, again whether ours or a competitor's, you probably should be running on one of the enterprise or LTS distros, and might want to look at the professional SKUs, where the support model is a bit different.

      The main point to remember is that the open source stack has been relatively stagnant for a decade or so, but in the last couple of years has "come back to life" as programming information for other GPUs has become available and as business interest in Linux use has started to spread from servers to business clients and consumer hardware. There have been very significant improvements over the last couple of years, and I expect that rapid progress to continue for a while.


      • Originally posted by bridgman
        Also, if you want to use a multi-GPU system professionally for more than 4 years under Linux, again whether ours or a competitor's, you probably should be running on one of the enterprise or LTS distros, and might want to look at the professional SKUs, where the support model is a bit different.
        Huh? GeForce 6 cards, which came out in early 2004, are still supported with the latest xorg / kernels. No need for LTS or enterprise solutions there.

        Also

        There is not a lot of interest in features like Crossfire / SLI in the open source community today,
        that last part should read

        There is not a lot of interest in features like Crossfire / SLI in the open source development community today,
        Pretty much every owner of a Crossfire / SLI system wants to use their hardware's full potential; otherwise, why would they have bought it in the first place?



        • Sure, and we supported R300, which came out in early 2002, until recently. Whoever moved products to legacy most recently is the bad guy.

          I don't understand your distinction between developer and user communities. The development community does the driver development, and so their interests are going to determine what goes into the driver. If the interests of the development community were to include multi-GPU support in the future (e.g. if users who are interested decide to get involved with development) then the development would happen and we would try to support the work.

          Not sure what your experience has been, but everyone I know who has purchased a multi-board system (Crossfire or SLI) has also tended to upgrade it fairly aggressively, so the cards tend to find new users (often 2 systems with a single card each) before proprietary driver support rolls off.

          Splitting into two single-card systems is not an option for a dual-GPU card, of course, although I guess it could work in a multi-head environment.


          • My mistake, I do agree with your distinction. Never mind.


            • Originally posted by bridgman
              Sure, and we supported R300, which came out in early 2002, until recently. Whoever moved products to legacy most recently is the bad guy.
              And Nvidia is still supporting products built a decade ago that were moved to legacy long ago but are still updated for xorg/kernels. I was just using the GF6 series as an example to rebut your point that one really should be using an enterprise/LTS/professional solution, since these cards would realistically be the lowest of the low for multi-card setups nowadays.



              • Yep, so are we (and so is Intel, I guess). We're just doing it on the open source code base rather than the proprietary drivers, which I think makes more sense over time.

                We could probably plot the level of support vs. age of the card for both vendors and agree on the results, and maybe even agree on the trends. Fortunately, neither of us has that kind of free time.


                • Geez, I really miss being able to edit. The last part of the second sentence wasn't very clear at all. I was trying to make two different points and didn't cover either one very well:

                  1. Right now there is a non-trivial drop in 3D performance & functionality going from the proprietary to the open source drivers, but that gap will close over time.

                  2. If we did plot "support vs time" for legacy proprietary vs open source, the open source line would drop immediately and then stay relatively flat or even go up a bit, while the legacy proprietary line would not really drop at all for a while, but would angle down more sharply with the passage of time as the underlying OS deviated further from what the driver was written to support.

                  Put differently, right after support is dropped from the mainstream driver the legacy proprietary option seems better, but that doesn't last, and after a few years it's likely that the open source drivers will make more users happy than the legacy proprietary ones.


                  • Originally posted by bridgman
                    Yep, so are we (and so is Intel, I guess). We're just doing it on the open source code base rather than the proprietary drivers, which I think makes more sense over time.
                    Sure, it would have made sense IF the replacement solution were already up to par with the previous one. Performance still lags badly on many of those SKUs that are barely 3 years old and performed better with the blob. Until that level is reached, the blobs should have been maintained. Forcing users to stick with an older blob to get good performance, and to give up things like kernel upgrades that often fix issues with other hardware (such as buggy SB600 southbridges, network adapters, etc.), is a hard pill to swallow.



                    • The choice wasn't really that simple. The open source drivers were already better at everything except 3D, and that covered relatively more users, so we took the open source route. That was probably the best choice for most users, but people who had recently purchased high-end 5xx boards for use with Linux probably would not agree.