The Interesting Tale Of AMD's FirePro Drivers

  • #31
    Originally posted by deanjo View Post
    Nobody was arguing the cost, just disputing the statements that "optimising code usually creates unmaintainable code", "optimizing usually means messier code", and "Assembly would be the best option to optimize, but it is not maintainable at all." All of those are simply false: the results are just as maintainable as any other code. More work, yes, but hardly "unmaintainable", and it may well be worth the effort if the end result pays better dividends.
    As much as I like ASM, it's far from being as maintainable as any high-level language, especially an object-oriented one. Give me assembly with overloading, interface definitions, abstract classes, run-time binding and other fancy things and it will be much more maintainable (but also slower?).
    That's if you mean optimizing in the sense of "given an algorithm, make it run as fast as possible". You may also optimize a problem by using different, more elaborate algorithms with lower computational complexity...
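
    Just to make that last distinction concrete, here is a trivial sketch (hypothetical functions, nothing to do with any driver code): swapping in a better algorithm can buy a huge speed-up while staying perfectly readable, which is a different beast from hand-tuning the instruction stream.

    #include <stdint.h>
    #include <stdio.h>

    /* O(n): the straightforward version, easy on the eyes */
    uint64_t sum_loop(uint64_t n)
    {
        uint64_t s = 0;
        for (uint64_t i = 1; i <= n; i++)
            s += i;
        return s;
    }

    /* O(1): a better algorithm, and still perfectly maintainable
       (assumes n is small enough that n*(n+1) does not overflow) */
    uint64_t sum_closed_form(uint64_t n)
    {
        return n * (n + 1) / 2;
    }

    int main(void)
    {
        printf("%llu %llu\n", (unsigned long long)sum_loop(1000),
                              (unsigned long long)sum_closed_form(1000));
        return 0;
    }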

    Comment


    • #32
      @deanjo: I can't believe you're actually arguing this point. No one is saying it's never worth it - of course it sometimes is. And certain code can be optimized while staying quite maintainable. But there's undeniably a strong correlation between optimizing code and making it less maintainable. You can do lots of things, like leaving comments, to try to mitigate that issue, but it's still there.

      Check out this code here - http://www.virtualdub.org/blog/pivot...hp?id=307#body

      Now compare that with the non-optimized version that would simply be written in C. Are you seriously going to argue that the assembly version is easier to adjust? That a low-paid kid straight out of college would instantly understand everything that's going on in the assembly version well enough to adjust it? That he wouldn't accidentally stick in a PALIGNR instead of the MOVSS + SHUFPS, causing a performance regression on some systems (while speeding up others), and that he would immediately notice this happening?
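
      The linked code is hand-written assembly; the gap is already visible one level up, with intrinsics. A rough sketch (an assumed scale-by-constant kernel, not the code from that blog post):

      #include <xmmintrin.h>  /* SSE intrinsics */
      #include <stdio.h>

      /* Plain C: any maintainer can read and change this. */
      void scale_c(float *dst, const float *src, float k, int n)
      {
          for (int i = 0; i < n; i++)
              dst[i] = src[i] * k;
      }

      /* SSE version: typically faster, but the maintainer now has to think
         about 4-wide registers, the scalar tail loop, and which instructions
         the compiler actually emits for each intrinsic. */
      void scale_sse(float *dst, const float *src, float k, int n)
      {
          __m128 vk = _mm_set1_ps(k);
          int i = 0;
          for (; i + 4 <= n; i += 4)
              _mm_storeu_ps(dst + i, _mm_mul_ps(_mm_loadu_ps(src + i), vk));
          for (; i < n; i++)      /* scalar tail */
              dst[i] = src[i] * k;
      }

      int main(void)
      {
          float src[10], dst[10];
          for (int i = 0; i < 10; i++) src[i] = (float)i;
          scale_sse(dst, src, 2.0f, 10);
          for (int i = 0; i < 10; i++) printf("%g ", dst[i]);
          printf("\n");
          return 0;
      }

      And in hand-written assembly there isn't even that layer: the MOVSS + SHUFPS vs PALIGNR choice sits right in the source, and picking the "obvious" one can silently cost performance on some CPUs.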

      Comment


      • #33
        Now if you're just talking about crappily written code, there can indeed be massive simplifications done which result in algorithmic optimizations to the code. Perhaps that's what you're referring to?

        Comment


        • #34
          Originally posted by deanjo View Post
          optimized != unmaintainable
          Humans have their own definition of "easiness".

          And chips, with their own configuration, have their own definition of "easiness".

          Take an example.

          For a human, the easiest code (for the mind and the eyes) to write "Hello" 20 times would be "Write Hello 20 times".

          For the chip, assembly code is much easier, but it is not a human language, and only people skilled in it will understand it.

          Contrast that with an interpreter, which runs "Write Hello 20 times" and translates it into binary in real time.

          Of course the most efficient option is to provide an optimized (i.e. unrolled) binary version of "Write Hello" using the best combination of instructions and registers available on the specific chip. That is very, very easy for the chip, but not at all for a human.

          Humans have their own definition of easiness (efficiency); chips have their own. Optimization for the chip can stay quite understandable for humans up to a point. Beyond that it shifts to a different level, exchanging understandable constructions for "why would you do that?" and blurring the logic away. And you need to understand the chip very well to do that.

          Which means that to YOU this code will be pretty maintainable, but not to the average programmer. Besides, since the overall consistency of the logic was given up for performance, this kind of optimized result is much harder to maintain over the long term: it requires more input, more understanding and more time for the same changes compared to the understandable version. This is the main reason optimization is given the least importance nowadays, with the focus on clean code and letting the optimizing compiler and the profiler do the job on the end result.
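
          To put the "write Hello 20 times" example in code (just an illustration, nothing from a real driver):

          #include <stdio.h>

          /* The human-friendly version: says what it means. */
          void hello_loop(void)
          {
              for (int i = 0; i < 20; i++)
                  puts("Hello");
          }

          /* The chip-friendly version a human (or compiler) might unroll it into:
             no counter, no branch, just 20 straight calls. Easy for the chip,
             tedious and error-prone for whoever edits it later. */
          void hello_unrolled(void)
          {
              puts("Hello"); puts("Hello"); puts("Hello"); puts("Hello");
              puts("Hello"); puts("Hello"); puts("Hello"); puts("Hello");
              puts("Hello"); puts("Hello"); puts("Hello"); puts("Hello");
              puts("Hello"); puts("Hello"); puts("Hello"); puts("Hello");
              puts("Hello"); puts("Hello"); puts("Hello"); puts("Hello");
          }

          int main(void) { hello_loop(); hello_unrolled(); return 0; }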

          Comment


          • #35
            @Crazycheese: Yes, that's my point. Unless you and the other programmers are all excellent, the optimisations might complicate things, and I'm not assuming that was the case here. It just seems too coincidental that the speed improvements regressed to their old level after only one update and never came back. All that boasting about a 20% gain seems stupid to me. Maybe the optimisations were even causing instability (at runtime) and were therefore reverted. Who knows.

            Comment


            • #36
              Originally posted by smitty3268 View Post
              Now if you're just talking about crappily written code, there can indeed be massive simplifications done which result in algorithmic optimizations to the code. Perhaps that's what you're referring to?
              No, what I'm referring to is that optimized code is maintainable. Like every other situation, it ultimately comes down to the competency of the person doing the maintenance and their familiarity with what is being done to achieve the optimization. It is no more "unmaintainable" than getting a die-hard VBasic programmer to maintain a Python app. You simply have to have the right skill set.

              Comment


              • #37
                Originally posted by bug77 View Post
                I was only talking about performance. Of course features will be added and bugs fixed, but the performance stays pretty much at the same level.
                Instead, when a new card hits the market and it can't match the competition on performance, all the fanboys go: "yeah, but the other card has been available for X months, just wait for the drivers to mature for the new card and then you'll see". It applies to both ATI/AMD and Nvidia fans.
                And there are enough reviews out there on the net that show that performance DOES increase between initial support and two or three driver releases later.

                On the windows platform.

                Especially with multi-card setups.

                Comment


                • #38
                  Originally posted by energyman View Post
                  And there are enough reviews out there on the net that show that performance DOES increase between initial support and two or three driver releases later.

                  On the windows platform.

                  Especially with multi-card setups.
                  Most of those performance increases come from application-specific optimizations. That is something that isn't really done in the Linux drivers.

                  Comment


                  • #39
                    Grr stupid 1 minute limit.

                    When was the last time you saw a changelog for Linux drivers that said something like:

                    - 17% increase in performance in Nexuiz on XYZ series cards
                    - 22% increase in performance in GIMP on XYZ series cards
                    - 37% increase in performance in ET:QW running in a multi-card setup on XYZ cards.

                    Comment


                    • #40
                      never, but when was the last time you saw something like:
                      - added support for kernel 2.6.XY
                      - added support for xorg-server version 1.uber-leet-experimental

                      in windows drivers?

                      Comment


                      • #41
                        Originally posted by energyman View Post
                        never, but when was the last time you saw something like:
                        - added support for kernel 2.6.XY
                        - added support for xorg-server version 1.uber-leet-experimental

                        in windows drivers?
                        That is adding support, not performance increases, and in the case of AMD's blob drivers even your examples are not that frequent.

                        Comment


                        • #42
                          You also don't see anything in the ATI changelog about changes that happen in the shared code base and benefit both sides. So if AMD tunes for a certain OpenGL app and that influences the Linux side too, you won't find it in the Linux changelog.

                          Apart from that - what did Phoronix really test? Quake-based stuff and SPECviewperf, but there is a lot more out there.

                          Comment


                          • #43
                            Originally posted by energyman View Post
                            You also don't see anything in the ATI changelog about changes that happen in the shared code base and benefit both sides. So if AMD tunes for a certain OpenGL app and that influences the Linux side too, you won't find it in the Linux changelog.
                            Actually, IIRC, when they improved their 2D performance in Windows, many of those changes were brought across and mentioned in the Linux release notes.

                            Comment


                            • #44
                              The general architecture of the ATI cards has been pretty similar since R600 was released, so I'm not surprised that the drivers were already tuned pretty well. The new DX11 features aren't being used in any of these tests; that's where I would expect driver improvements to show up. The new 6000 series does finally switch to a new architecture, so it would be more interesting to track that and see whether the performance picks up there or not. The same has been true of Nvidia, which used the same architecture for a long time until the switch to Fermi.

                              Comment


                              • #45
                                Originally posted by smitty3268 View Post
                                The general architecture of the ATI cards has been pretty similar since R600 was released, so I'm not surprised that the drivers were already tuned pretty well. The new DX11 features aren't being used in any of these tests; that's where I would expect driver improvements to show up. The new 6000 series does finally switch to a new architecture, so it would be more interesting to track that and see whether the performance picks up there or not. The same has been true of Nvidia, which used the same architecture for a long time until the switch to Fermi.
                                Well, my argument is that ATi's binary blob code might well be more mediocre than nVidia's, simply because nVidia works better in Linux most of the time. They usually have much better binary drivers, and faster ones too, compared to ATi. Things have changed for ATi in the past 3 years, but I still feel that nVidia is way ahead regarding their portable code management etc... Again, I have no idea what I'm talking about, even though I'm an AMD fanboy. x)

                                Comment
