
The Interesting Tale Of AMD's FirePro Drivers


  • #16
    It very much looks like AMD spun things a bit there. Or is it just a coincidence that performance spiked at each quarterly FireGL driver launch and dropped back afterwards?

    Alternatively, the quarterly driver may come from a somewhat different branch than the monthlies, and anyone wanting performance should be running the quarterlies.



    • #17
      Originally posted by b15hop View Post
      You're surprised that there is so much grey? The website isn't much better. I'm sure the website could be improved a bit too, for that matter, heh. =)
      Not surprised at all, and I like the site as it is. I was just noting that for that particular table the grey shades seem to be used liberally, and with little thought for ergonomics, I might add.

      Originally posted by b15hop View Post
      Regarding the drivers spiking 20% and then dropping in performance: my understanding, from a programming perspective, is that optimising code usually creates unmaintainable code, versus code that is neat, easy to read, and can easily have new features added. It's an old coding conundrum that goes way back; I think it's the reason the wheel of compilers and languages keeps being reinvented. So the developers probably did include some speed improvements, which might have also made the code more difficult to update, and therefore regressed back to the older code. I could be wrong...
      It's got nothing to do with maintainability; you would have to be a very poor programmer to hit that problem (you just have to document your code if it's not easily readable). What can happen is that you find a smart new optimization. You add it to the code and test it as best you can. It works. Then you release the new driver, and a bunch of people using applications you never thought of start reporting problems. Then you either have to remove your optimization or add additional checks, which will have some negative impact on performance.
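
      A minimal sketch of that pattern, with invented application and function names rather than anything from a real driver:

        # Applications discovered to misbehave with the fast path; found
        # only after the optimized driver shipped.
        BROKEN_APPS = {"legacy_cad_tool", "old_viz_suite"}

        def submit_draw(app_name, commands):
            """Dispatch a draw call, preferring the optimized path."""
            if app_name in BROKEN_APPS:
                # The extra per-call check is the price of keeping the
                # optimization for everyone else instead of reverting it.
                slow_but_safe_path(commands)
            else:
                fast_optimized_path(commands)

        def fast_optimized_path(commands):
            pass  # the new, clever submission path

        def slow_but_safe_path(commands):
            pass  # the original, conservative path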



      • #18
        Originally posted by Michael View Post
        I've posted about this a couple of times in other threads for readers wanting similar tables... It's already implemented in the Phoronix Test Suite and OpenBenchmarking.org to auto-generate nicer tables. Here's an example of a completely auto-generated one right now:

        [embedded example table from OpenBenchmarking.org]

        PTS takes care of figuring out everything and coming up with a table to highlight the differences. Once Phoronix.com is using the OpenBenchmarking.org-embedded graphs rather than static PNG files (within a couple of weeks, hopefully), those tables will be included, but I am not bothering with any stopgap measures in the meantime, or anything that requires more manual work on my part.
        Yes, we've been hearing that for months now. I really don't understand why you can't take the 15 seconds it LITERALLY takes to massively improve your articles - do you not bother running a spellcheck because it's too much work?

        Anyway, in more constructive criticism: I think the simpler table posted earlier still looks way better than the PTS-generated one. There's simply no need for every driver version to have its own column; it makes the table too wide to be useful. I assume that even if you took out the one row that changed (OpenGL version), it would still list them as separate columns? I see what you're trying to do here, and the automated generation is neat, but IMO it still needs some work.



        • #19
          Originally posted by smitty3268 View Post
          I assume that even if you took out the one row that changed (OpenGL version), it would still list them as separate columns? I see what you're trying to do here, and the automated generation is neat, but IMO it still needs some work.
          It would still stay the same width since the display driver differs too. But if those two rows were removed, it would compact as much as possible.
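
          A hypothetical model of that compaction rule (invented values, not PTS's actual code): two result columns can collapse into one only when every spec row matches.

            def can_merge(col_a, col_b):
                """Two results may share one table column only if every
                spec row has an identical value in both."""
                return all(col_a[row] == col_b[row] for row in col_a)

            a = {"CPU": "FX-8150", "Display Driver": "fglrx 8.90", "OpenGL": "4.1"}
            b = {"CPU": "FX-8150", "Display Driver": "fglrx 8.91", "OpenGL": "4.1"}
            print(can_merge(a, b))  # False: the display driver still differs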
          Michael Larabel
          http://www.michaellarabel.com/



          • #20
            Originally posted by smitty3268 View Post
            it makes it too wide to be useful.
            Too wide to be easily readable is what I meant to say. On a more positive note, it's massively better than the current status quo.



            • #21
              Originally posted by Michael View Post
              It would still stay the same width since the display driver differs too. But if those two rows were removed, it would compact as much as possible.
              Oh, that's interesting. And good!

              On the chart shown, the issue is that you have to scan your eyes way across the page to get to the correct information. Perhaps an improvement would be to left-justify the text in certain circumstances (like tables more than three columns wide). I'm just not sure whether that would look good without seeing it.



              • #22
                It's interesting to note that the width of the table would only be an issue in articles testing a massive number of variations of the independent variable(s). If you're just testing Ubuntu 10.10 vs. Arch Linux, that's only two values of one variable, and it would look really nice. If you're testing two years of monthly releases, you get a lot of wasted space.

                One possible optimization that could be generalized in the graph-generation logic: if the number of distinct values in a spec row is greater than a certain threshold, then don't include that row in the main table, but rather list its values in a separate table. I define a "variable row" as any row whose data changes from one column to the next. For example, the CPU row in this article has exactly one distinct value: for all test profiles used, it's the same CPU. The massively variable rows in this table are the OpenGL version and the Catalyst driver version. Instead of making the whole table artificially wide, you could create another two-column table listing the OpenGL and Catalyst versions used as independent variables in each test. I hope I explained that clearly enough; the detection step is sketched below.
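
                A minimal sketch of that detection step, assuming the spec portion of a result table is a mapping from row name to per-column values (the data here is invented, not from the article):

                  # Hypothetical spec rows: name -> value per result column.
                  specs = {
                      "CPU":      ["FX-8150"] * 4,
                      "OpenGL":   ["4.1.10750", "4.1.11005", "4.2.11078", "4.2.11251"],
                      "Catalyst": ["11.9", "11.10", "11.11", "11.12"],
                  }

                  def variable_rows(specs, threshold=1):
                      """Names of rows with more distinct values than the
                      threshold; everything else stays in the main table."""
                      return [name for name, values in specs.items()
                              if len(set(values)) > threshold]

                  print(variable_rows(specs))  # ['OpenGL', 'Catalyst']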



                • #23
                  Another way to state what I just wrote (damn edit limit) is to say that it is possible to distinguish between static data and variable data.

                  Static data remains constant despite changes in other variables, and indeed it has to be the same for every test. Usually, your hardware will be static data, although sometimes you will have two hardware columns if you're comparing the performance of different hardware.

                  Variable data, from what I've seen, is absolutely mandatory for any test to be meaningful. In other words, if you have no varying data and you run, say, the OpenArena FPS test against your configuration, all you can say is "Yeah, that FPS number looks pretty good". To have some meat to analyze, you must have variables.

                  The problem is when you have a low number of variable fields (in this case, just the Catalyst driver version and the OpenGL version string) but a high number of variations of one or more of those fields. It would be great if the software could detect this case and break out the highly variable data into a separate table.

                  I'm a programmer, so I kind of have to convince myself that this would even be possible to generalize before I try to suggest it to you, but on the other hand I am sure there are people out there smarter than me who have already done something like this effectively. I already envision complications if you are trying a full factorial of multiple variables against each other. A simple example is the 2x2 case: test variables X and Y in a boolean breakdown of {X=1,Y=0}, {X=1,Y=1}, {X=0,Y=0}, {X=0,Y=1}. So if you have an NGV PowerOn DH75999+ and you want to compare the performance of the NGV Initiator Linux drivers from early 2011 and mid-2011, but you also want to compare performance with Smear-Free toggled on and off, you would have to describe this 2x2 variation succinctly in a table. Hmm.
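
                  Enumerating that 2x2 case is straightforward, at least; a sketch using the fictional hardware above:

                    from itertools import product

                    drivers = ["Initiator early-2011", "Initiator mid-2011"]
                    smear_free = [False, True]

                    # Each combination becomes one result column in the variable table.
                    for driver, smear in product(drivers, smear_free):
                        print(f"{driver}, Smear-Free {'on' if smear else 'off'}")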

                  Sounds like the easiest way to divide up the tables is to just have two tables regardless: one containing static data and one containing variable data, roughly as sketched below. This is only really necessary for readability purposes when you have a large number of variables (or combinations of variables), but it wouldn't hurt to do it this way all the time.
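
                  A self-contained sketch of that split, again with invented data:

                    def split_table(specs):
                        """Partition spec rows into a static table (one value for
                        the whole comparison) and a variable table (one row per
                        result column)."""
                        static, variable = {}, {}
                        for name, values in specs.items():
                            if len(set(values)) == 1:
                                static[name] = values[0]
                            else:
                                variable[name] = values
                        return static, variable

                    specs = {
                        "CPU":      ["FX-8150"] * 4,
                        "Memory":   ["8GB DDR3"] * 4,
                        "Catalyst": ["11.9", "11.10", "11.11", "11.12"],
                    }
                    static, variable = split_table(specs)
                    print(static)    # {'CPU': 'FX-8150', 'Memory': '8GB DDR3'}
                    print(variable)  # {'Catalyst': ['11.9', '11.10', '11.11', '11.12']}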



                  • #24
                    Originally posted by deanjo View Post
                    That is the biggest crock of crap I have read. A talented and competent programmer can make optimized and readable code. Of course, the person reading the code has to have at least some understanding of what the optimizations do for it to make sense.
                    In an ideal world, maybe. In the real world, optimizing usually means messier code. Assembly would be the best option for optimization, but it is not maintainable at all.
                    For example, maintainable code means that you need to respect variable scope and pass the variables you need every time you call a method or a procedure, whereas using global variables would be much faster but also less maintainable.
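
                    A toy illustration of the structural trade-off being described (Python for brevity; the speed argument is really about compiled languages, and all names here are invented):

                      # Maintainable style: state is passed explicitly, so every
                      # reader can see exactly what the function depends on.
                      def render_frame(scene, camera, framebuffer):
                          framebuffer.draw(scene, camera)

                      # Global-state style: no arguments to shuffle around, but any
                      # code anywhere may mutate these between calls, and the
                      # dependency is invisible at the call site.
                      g_scene = g_camera = g_framebuffer = None

                      def render_frame_global():
                          g_framebuffer.draw(g_scene, g_camera)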



                    • #25
                      Originally posted by blackshard View Post
                      In an ideal world, maybe. In the real world, optimizing usually means messier code. Assembly would be the best option for optimization, but it is not maintainable at all.
                      For example, maintainable code means that you need to respect variable scope and pass the variables you need every time you call a method or a procedure, whereas using global variables would be much faster but also less maintainable.
                      Haha, I have first-hand experience with global-variable abuse: old, hacked-together code for a '90s Windows game. A global string that gets overwritten hundreds or thousands of times per minute with any resource loaded from the string table, any debug message, and any user-visible string rendered to the UI. Absolutely minimal memory usage, no heap-allocation thrashing, but a clusterfsck of unmaintainable code. It would be fine if they had stuck to the structured-programming discipline of only reading the datum within the same scope it was written in, but unfortunately there are hundreds of places in the code where the datum is written, and assumptions are made about the code paths. You see functions all over the place reading the global variable, with no idea which function was supposed to have set it at some point earlier in the execution path; half the time the function that did the write is no longer even on the call stack. Fun...
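
                      The shape of that anti-pattern, as a toy sketch (invented names, obviously not the actual game's code):

                        g_text = ""  # one global scratch string, overwritten from everywhere

                        def load_resource(res_id):
                            global g_text
                            g_text = f"resource {res_id}"  # writer #1: string-table loads

                        def log_debug(msg):
                            global g_text
                            g_text = msg                   # writer #2: debug messages

                        def draw_label():
                            # Reader: assumes some earlier code path left the right
                            # label in g_text; that writer may not even be on the
                            # call stack anymore.
                            print(g_text)                  # stand-in for UI rendering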



                      • #26
                        Originally posted by blackshard View Post
                        In an ideal world, maybe. In the real world, optimizing usually means messier code. Assembly would be the best option for optimization, but it is not maintainable at all.
                        For example, maintainable code means that you need to respect variable scope and pass the variables you need every time you call a method or a procedure, whereas using global variables would be much faster but also less maintainable.
                        I'm not denying that it can and often does happen. That is the programmer's doing, though; it isn't a necessity of writing optimized code. Even assembly allows comments.



                        • #27
                          Originally posted by deanjo View Post
                          I'm not denying that it can and often does happen. That is the programmer's doing, though; it isn't a necessity of writing optimized code. Even assembly allows comments.
                          Comments in assembly code won't make it much more maintainable. Picking up asm code you haven't looked at for a year will take much more time and money than picking up high-level-language code in the same conditions.

                          The object-oriented paradigm was invented for maintainability, but it came at the expense of some speed.



                          • #28
                            Originally posted by blackshard View Post
                            Comments in assembly code won't make it much more maintainable. Picking up asm code you haven't looked at for a year will take much more time and money than picking up high-level-language code in the same conditions.
                            Bollocks; I've been maintaining assembly for years. Whether the optimization is worth it is debatable, as that depends on the programmer doing the maintaining. Nothing you have said has proven that "optimising code usually creates unmaintainable code".

                            optimized != unmaintainable



                            • #29
                              Originally posted by deanjo View Post
                              Bollocks; I've been maintaining assembly for years. Whether the optimization is worth it is debatable, as that depends on the programmer doing the maintaining. Nothing you have said has proven that "optimising code usually creates unmaintainable code".

                              optimized != unmaintainable
                              Happy for you, but a statistic made from a single data point is not a valid statistic.
                              You said that maintenance depends on the programmer doing it, and that's true, but in general debugging and maintaining asm code is more costly (in terms of time and money) than maintaining C code, and maintaining C code is more costly than maintaining Java/C#/(put your preferred OO language here).
                              I never said that asm code is impossible to maintain (did I?), just that it's more difficult and costs more.
                              Also, asm code was just an example, as were the global-scope variables.



                              • #30
                                Originally posted by blackshard View Post
                                Happy for you, but a statistic made from a single data point is not a valid statistic.
                                You said that maintenance depends on the programmer doing it, and that's true, but in general debugging and maintaining asm code is more costly (in terms of time and money) than maintaining C code, and maintaining C code is more costly than maintaining Java/C#/(put your preferred OO language here).
                                I never said that asm code is impossible to maintain (did I?), just that it's more difficult and costs more.
                                Also, asm code was just an example, as were the global-scope variables.
                                Nobody was arguing the cost, just disputing the statements that "optimising code usually creates unmaintainable code", that "optimizing usually means messier code", and that "Assembly would be the best option for optimization, but it is not maintainable at all". Those statements are simply false: such code is just as maintainable as any other code. More work, yes, but hardly "unmaintainable", and it may well be worth the effort if the end result pays better dividends.

