
The "Dirndl" On AMD Opterons Are Impressive


  • #21
    Originally posted by gbeauche View Post
    GCC 3.2 for AMD. And it seems that the current code state of the driver prevents it from being moved up to compile with GCC 4.x.
    I doubt that they have any real motivation to use a better compiler, though. First of all, any compiler they use would have to be compatible with the idiosyncrasies of the compiler they use for building the Windows drivers, since most of the code is shared. Maybe they use GCC 3.2 because they understand the limitations and language support status of this compiler so well that they can confidently train their staff to write code against a safe C/C++ subset that works on Microsoft's compiler as well as GCC 3.2. Basically they might have some 20-50 page handbook or wiki site (internal, of course) documenting the do's and don'ts of the syntax and semantics of the driver, with rules derived from careful observation of the compilers they use across their different platforms. Changing the compiler changes the rules of the game, which could mean code rewrites are required.

    Aside from the engineering challenge of rewriting stuff to use a more strict compiler like gcc 4.x (let alone a less popular compiler like Intel's ICC or AMD's Open64 or PathScale's EKOPath), the other question is: would it provide a measurable performance benefit?

    The answer really depends on how CPU-limited most 3D rendering is within fglrx. From the benchmarks I've seen, fglrx is rather heavily GPU-limited in most cases, which is what it should be. Don't get me wrong, it eats CPU intensively while rendering -- but it's not like the code is so egregiously inefficient that the GPU sits there waiting for a command while the CPU can't chew through the code fast enough to feed it. If those situations were being hit, you'd be able to measure it by plugging in a faster GPU: if a more capable GPU doesn't provide a commensurate increase in performance, then the workload isn't GPU-bound, so it must be either memory-bound or CPU-bound. But I've seen enough fglrx benchmarks on Phoronix that it's pretty clear to me that bigger card == better FPS.

    PathScale EKOPath, for its part, doesn't seem to have anything to do with a GPU; it's just a very efficient C/C++ compiler for the CPU. So if fglrx isn't CPU-bound, then increasing the efficiency of the parts of fglrx that run on the CPU is not going to result in a noticeable performance increase -- especially with less-capable graphics cards, where more than likely the CPU will sit there waiting for the GPU to finish processing, rather than the reverse.

    Still, it's good information. I find it intriguing, but not surprising, that they use GCC 3.2. And I don't think there will be a whole lot of pressure to use something different.

    The gallium3d drivers, on the other hand, tear through CPU like nobody's business. Reducing the CPU-boundedness of the open source graphics stack would be a huge win.

    Comment


    • #22
      Originally posted by allquixotic View Post
      Aside from the engineering challenge of rewriting stuff to use a more strict compiler like gcc 4.x (let alone a less popular compiler like Intel's ICC or AMD's Open64 or PathScale's EKOPath), the other question is: would it provide a measurable performance benefit?
      My point was not performance, but code fixes and new features that could help them fix some of their bugs. Some time ago there was a memory allocation problem with XvBA/Catalyst: XvBA used the wrong allocators and then leaked memory. One solution to avoid the clashes was to use visibility attributes and not expose certain symbols, or to use linker scripts to filter out only the necessary ones. At the time, though, another "solution" was chosen because it required the least effort, compared with fixing the code for C++ conformance and then regression testing it.

      In your wiki model (of do's and don'ts), you are probably talking about a C++ subset. If such a subset were used, then any compliant compiler would still compile it. If that is not the case, the subset was probably not standard-compliant in the first place.

      PathScale EKOPath, for its part, doesn't seem to have anything to do with a GPU; it's just a very efficient C/C++ compiler for the CPU.
      Well, it has probably evolved, but some time ago PathScale was not as efficient a compiler as it was marketed to be. Michael's figures tend to say "yes" nowadays, though. Anyway, they did a great job of making it more GCC-compatible (newer C++ ABI at the time) than the original Pro64/Open64/ORC/whatever was. I still wish ENZO were going open source instead of EKOPath; IIRC, sources for the latter were already available to some research institutions.

      Comment


      • #23
        If it is EKOPath, I'm going to regret not using Gentoo :P

        Anyhow, the PKGBUILD for mesa will get a tweak, that's for sure ^^

        David

        Comment


        • #24
          waste of time

          so why are you wasting my time with results of "dirndl" without telling me what it is?

          Comment


          • #25
            Originally posted by Linuxhippy View Post
            so why are you wasting my time with results of "dirndl" without telling me what it is?
            You're kidding, right? If not, be ashamed of yourself and read the article one more time.

            Comment


            • #26
              I am genuinely excited.
              Here's hoping it makes normal software go faster too (Firefox? Kernel? JRE?)

              J.

              Comment


              • #27
                They already released some of their code under a BSD licence:

                http://www.prweb.com/releases/2011/5/prweb8464380.htm

                It would be nice if this code dump would be BSD too.

                Comment


                • #28
                  Originally posted by kuse View Post
                  You're kidding, right? If not, be ashamed of yourself and read the article one more time.
                  I read the article several times, and still have no idea what "dirndl" is, other than some kind of magic sauce that makes stuff go faster....

                  Apparently it somehow involves GPUs; a compiler/infrastructure for running traditional CPU tasks on GPUs maybe? I dunno, because the article simply doesn't say.

                  This kind of stuff does make phoronix look bad.

                  Comment


                  • #29
                    Originally posted by snogglethorpe View Post
                    I read the article several times, and still have no idea what "dirndl" is, other than some kind of magic sauce that makes stuff go faster....

                    Apparently it somehow involves GPUs; a compiler/infrastructure for running traditional CPU tasks on GPUs maybe? I dunno, because the article simply doesn't say.

                    This kind of stuff does make phoronix look bad.
                    Then apparently you didn't read the article. I'll help you out so you don't look so stupid here on the forum:

                    Those that follow my Twitter feed know a big software announcement is pending after being set back multiple times over the past week. Here's one graph illustrating the real-world impact of this yet-to-be-announced open-source move for open operating systems.
                    In the graph below, "Dirndl" is the codename for this new project that we shall use until the official announcement is made, as the results are just so irresistible. The Ubuntu 11.04 result is the value of a stock Ubuntu Natty installation.

                    Comment


                    • #30
                      Originally posted by snogglethorpe View Post
                      Apparently it somehow involves GPUs
                      Where in the article does it say 'GPU'? Which test is testing the GPU?

                      Comment
