The Interesting Tale Of AMD's FirePro Drivers


  • #21
    Originally posted by Michael View Post
    It would still stay the same width since the display driver differs too. But if those two rows were removed, it would compact as much as possible.
    Oh, that's interesting. And good!

    On the chart shown, the issue is that you have to scan all the way across the page to get to the correct information. Perhaps an improvement could be to left-justify the text in certain circumstances (say, when the table is more than 3 columns wide). I'm just not sure if that would look good without seeing it.



    • #22
      It's interesting to note that the width of the table only becomes an issue in articles where you are testing a massive number of variations of the independent variable(s). If you're just testing Ubuntu 10.10 vs Arch Linux, that's only two variants and the table would look really nice. If you're testing two years of monthly releases, you get a lot of wasted space.

      One possible optimization that could be generalized in the graph-generation logic: if the number of variable columns exceeds a certain threshold, don't include those variables as rows, but rather list them in a separate table. I define a "variable column" as any data which changes from one column to the next. For example, the CPU row in this article has exactly one value: for all test profiles used, it's the same CPU, so it's static. The massively variable rows in this table are the OpenGL version and the Catalyst driver version. Instead of making the whole table artificially wide, you could create another two-column table listing the OpenGL and Catalyst versions used as independent variables in each test. A rough sketch of what I mean follows; I hope I explained that clearly enough.
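
      Not PTS's actual code, just a rough Python sketch of the kind of check I mean; the function, field names, version strings, and threshold are all made up:

      ```python
      # Each row of the system table maps a field name to one value per
      # result column. A field is "variable" if its values differ.
      def split_system_table(rows):
          static = {f: v[0] for f, v in rows.items() if len(set(v)) == 1}
          variable = {f: v for f, v in rows.items() if len(set(v)) > 1}
          return static, variable

      COLUMN_THRESHOLD = 3  # arbitrary cutoff for "too wide to keep inline"

      rows = {
          "CPU": ["Example CPU"] * 4,  # identical everywhere -> static
          "OpenGL": ["3.3.10151", "3.3.10188", "4.1.10222", "4.1.10317"],
          "Catalyst": ["10.10", "10.11", "10.12", "11.1"],
      }
      static, variable = split_system_table(rows)
      if len(next(iter(rows.values()))) > COLUMN_THRESHOLD:
          # Too many columns: break the variable fields out into their own
          # two-column table instead of widening the main one.
          for field, values in variable.items():
              print(f"{field}: " + ", ".join(values))
      ```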



      • #23
        Another way to state what I just wrote (damn edit limit) is that it is possible to distinguish between static data and variable data.

        Static data remains constant despite changes in other variables, and indeed it has to be the same for every test. Usually, your hardware will be static data, although sometimes you will have two hardware columns if you're comparing the performance of different hardware.

        Variable data is absolutely mandatory for any test to be meaningful, based on what I've seen. In other words, if you have no varying data, then if you ran, say, the OpenArena FPS test against your configuration, all you could say is "Yeah, that FPS number looks pretty good". To have some meat to analyze, you must have variables.

        The problem is when you have a low number of variable fields (in this case, just the Catalyst Driver version and OpenGL version string), but a high number of variations of one or more of those fields. It would be great if the software could detect this case and break out the highly-variable data into a separate table.

        I'm a programmer, so I kind of have to convince myself that this would even be possible to generalize before I try to suggest it to you, but on the other hand I am sure there are people out there smarter than me who have already done something like this effectively. I already envision complications if you are trying a full factorial of multiple variables against each other. A simple example is the 2x2 case: test variables X and Y in a boolean breakdown of {X=0,Y=0}, {X=0,Y=1}, {X=1,Y=0}, {X=1,Y=1}. So if you have an NGV PowerOn DH75999+ and you want to compare the performance between the NGV Initiator Linux Drivers from early 2011 and mid-2011, but you also want to compare performance with Smear-Free toggled on and off, you would have to describe this 2x2 variation succinctly in a table (a quick sketch of enumerating it follows). Hmm.
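
        Just to convince myself the 2x2 enumeration is tractable, a quick sketch (the driver and toggle names are the hypothetical ones from my example above):

        ```python
        # A full factorial of two two-valued variables is a cross product.
        from itertools import product

        drivers = ["Initiator early-2011", "Initiator mid-2011"]
        smear_free = ["Smear-Free off", "Smear-Free on"]

        for driver, toggle in product(drivers, smear_free):
            print(f"{driver} + {toggle}")  # one test configuration per combo
        ```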

        Sounds like the easiest way to divide up the tables is to just have two tables regardless: one table containing static data, and one table containing variable data. This is only really necessary for readability purposes if you have a large number of variables (or combinations of variables), but it wouldn't hurt to do it this way all the time.
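
        Something like this, say (again just a sketch; the layout and field names are placeholders):

        ```python
        # Always emit two tables: static configuration, then variables.
        def render_tables(rows):
            static = {f: v[0] for f, v in rows.items() if len(set(v)) == 1}
            variable = {f: v for f, v in rows.items() if len(set(v)) > 1}
            print("== Static configuration ==")
            for field, value in static.items():
                print(f"  {field}: {value}")
            print("== Variables under test ==")
            for field, values in variable.items():
                print(f"  {field}: " + " | ".join(values))

        render_tables({
            "CPU": ["Example CPU"] * 3,            # static
            "Catalyst": ["10.12", "11.1", "11.2"], # variable
        })
        ```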



        • #24
          Originally posted by deanjo View Post
          That is the biggest crock of crap I have read. A talented and competent programmer can make optimized and readable code. Of course, the person reading the code has to have at least some understanding of what the optimizations do for it to make sense.
          In an ideal world, maybe. In the real world, optimizing usually means messier code. Assembly would be the best option for optimizing, but it is not maintainable at all.
          For example, maintainable code means that you need to respect variable scope and pass the variables you need every time you call a method or a procedure; using global variables would be faster, but also less maintainable.
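
          To illustrate what I mean with a toy Python example (whether the global version is actually faster depends on the language and compiler, but the readability cost is the point):

          ```python
          # Scoped style: state is passed explicitly, so every dependency
          # is visible at the call site. More typing, easier to follow.
          def accumulate(total, value):
              return total + value

          # Global style: nothing to pass around, but any code anywhere can
          # read or clobber `total`, and call sites hide the dependency.
          total = 0

          def accumulate_global(value):
              global total
              total += value

          print(accumulate(0, 5))  # 5 -- self-contained
          accumulate_global(5)
          print(total)             # 5 -- but who else touched `total`?
          ```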



          • #25
            Originally posted by blackshard View Post
            In an ideal world, maybe. In the real world, optimizing usually means messier code. Assembly would be the best option for optimizing, but it is not maintainable at all.
            For example, maintainable code means that you need to respect variable scope and pass the variables you need every time you call a method or a procedure; using global variables would be faster, but also less maintainable.
            Haha, I have first-hand experience with global variable abuse. Old, hacked-together code for a 90s Windows game. A global string variable that basically gets overwritten hundreds or thousands of times per minute with any resource loaded from the string table, any debug message, and any user-visible string rendered to the UI. Absolutely minimal memory usage, no heap allocation thrashing, but a clusterfsck of unmaintainable code. It'd be fine if they had stuck to the structured programming paradigm of only reading the datum within the same scope where it was written, but unfortunately there are hundreds of places in the code where the datum is written, and assumptions about the code paths are made. You see functions all over the place reading the global variable, and you have no idea which function was supposed to have set it at some point earlier in the execution path. Half the time the function that did the write is no longer on the call stack. Fun...
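
            A made-up miniature of the pattern (Python rather than whatever the game was written in, and nothing like the real code):

            ```python
            g_scratch = ""  # one global string reused for everything

            def load_resource(res_id):
                global g_scratch
                g_scratch = f"resource:{res_id}"  # overwrites what was there

            def log_debug(msg):
                global g_scratch
                g_scratch = msg                   # ...and so does this

            def draw_ui_label():
                # Assumes load_resource() wrote g_scratch last -- but if any
                # other code path ran in between, this shows the wrong string.
                print("UI:", g_scratch)

            load_resource(42)
            log_debug("texture cache miss")  # breaks the assumption
            draw_ui_label()                  # prints the debug message
            ```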



            • #26
              Originally posted by blackshard View Post
              In an ideal world, maybe. In the real world, optimizing usually means messier code. Assembly would be the best option for optimizing, but it is not maintainable at all.
              For example, maintainable code means that you need to respect variable scope and pass the variables you need every time you call a method or a procedure; using global variables would be faster, but also less maintainable.
              I'm not denying that it can and often does happen. That is the programmer's doing, but it isn't necessary in order to write optimized code. Even assembly allows comments.



              • #27
                Originally posted by deanjo View Post
                I'm not denying that it can and often does happen. That is the programmer's doing, but it isn't necessary in order to write optimized code. Even assembly allows comments.
                Comments in assembly code won't make it more maintainable. Putting your hands on asm code after a year of not looking at it will require much more time and money than doing the same with high-level-language code under the same conditions.

                The object-oriented paradigm was invented for maintainability, but it came at the expense of some speed.



                • #28
                  Originally posted by blackshard View Post
                  Comments in assembly code won't make it more maintainable. Putting your hands on asm code after a year of not looking at it will require much more time and money than doing the same with high-level-language code under the same conditions.
                  Bollocks, I've been maintaining assembly for years. Whether the optimization is worth it is debatable, as that is dependent on the programmer doing the maintaining. Nothing you have said has proven that "optimising code usually creates unmaintainable code".

                  optimized != unmaintainable



                  • #29
                    Originally posted by deanjo View Post
                    Bollocks, I've been maintaining assembly for years. Whether the optimization is worth it is debatable, as that is dependent on the programmer doing the maintaining. Nothing you have said has proven that "optimising code usually creates unmaintainable code".

                    optimized != unmaintainable
                    Happy for you, but a statistic made from a single data point is not a valid statistic.
                    You said that maintenance depends upon the programmer doing it, and that's true, but in general debugging and maintaining asm code is more costly (in terms of time and money) than maintaining C code, and maintaining C code is more costly than maintaining Java/C#/(put your preferred OO language here).
                    I never said that asm code is impossible to maintain (did I?), I just said that it's more difficult to do, and costs more.
                    Also, asm code was just an example, as were the global-scope variables.



                    • #30
                      Originally posted by blackshard View Post
                      Happy for you, but a statistic made from a single data point is not a valid statistic.
                      You said that maintenance depends upon the programmer doing it, and that's true, but in general debugging and maintaining asm code is more costly (in terms of time and money) than maintaining C code, and maintaining C code is more costly than maintaining Java/C#/(put your preferred OO language here).
                      I never said that asm code is impossible to maintain (did I?), I just said that it's more difficult to do, and costs more.
                      Also, asm code was just an example, as were the global-scope variables.
                      Nobody was arguing the cost, just disputing the statements that "optimising code usually creates unmaintainable code", "optimizing usually means messier code", and "Assembly would be the best option for optimizing, but it is not maintainable at all." All of those statements are simply false, as such code is just as maintainable as any other code. More work, yes, but hardly "unmaintainable", and it may well be worth the effort if the end result pays better dividends.

