AMD Developing Next-Gen Fortran Compiler Based On Flang, Optimized For AMD GPUs


  • AMD Developing Next-Gen Fortran Compiler Based On Flang, Optimized For AMD GPUs

    Phoronix: AMD Developing Next-Gen Fortran Compiler Based On Flang, Optimized For AMD GPUs

    AMD today went public with details on the "AMD Next-Gen Fortran Compiler" as a new Fortran compiler they are working on based on LLVM's Flang...


  • #2
    It's funny: back in the '80s when I was in high school, I took a class on FORTRAN, and even then my teacher said that FORTRAN was a dying language and he didn't understand why they were teaching it.

    He said the same thing about COBOL.

    40 years later and they are still going strong.

    I don't get AMD at all, on the one hand they seem to be all in on pure CPU performance via more cores at the expense of their GPU business.

    On the other hand they have invested nearly a billion dollars buying up other companies that focus on AI.

    Today there was a story that AMD is cutting 4% of its global workforce to focus on AI, in the hopes of swimming across NVIDIA's moat.

    I can't help but think that AMD is barking up the wrong tree.

    Comment


    • #3
      There's a huge pile of legacy programs used in scientific and technical computing that are written in Fortran. The language isn't going away any time soon.

      Being able to run Fortran on 'the desktop' on a GPU is useful. E.g.

      This post is the first in a series on CUDA Fortran, which is the Fortran interface to the CUDA parallel computing platform. If you are familiar with CUDA C, then you are already well on your way to…
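
      For flavour, a minimal CUDA Fortran sketch in the style of that series (it needs NVIDIA's nvfortran from the HPC SDK to build; the module and variable names are just illustrative):

        module saxpy_mod
        contains
          ! GPU kernel: y = a*x + y, one element per thread
          attributes(global) subroutine saxpy(n, a, x, y)
            integer, value :: n
            real, value :: a
            real :: x(*), y(*)
            integer :: i
            i = (blockIdx%x - 1) * blockDim%x + threadIdx%x
            if (i <= n) y(i) = a * x(i) + y(i)
          end subroutine
        end module

        program main
          use cudafor
          use saxpy_mod
          implicit none
          integer, parameter :: n = 4096
          real :: x(n), y(n)
          real, device :: x_d(n), y_d(n)   ! device-resident arrays
          x = 1.0; y = 2.0
          x_d = x; y_d = y                 ! host-to-device copies
          call saxpy<<<(n + 255) / 256, 256>>>(n, 2.0, x_d, y_d)
          y = y_d                          ! device-to-host copy
          print *, 'max error:', maxval(abs(y - 4.0))
        end program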


      But compiler validation is important. I wasted half a year trying to get the same source code to give the same results when compiled by two different compilers on different machines/CPU types. It was a standard source distributed to dozens, if not hundreds, of institutions, and the compiler on the new machine had apparently passed all the (then-current) validation tests. It still gave completely kooky results, and I ended up having to borrow time on somebody else's computer (a large petrochemical/pharmaceutical company helped me out), where the test runs just worked, and I got usable results in about a month.

      Comment


      • #4
        I wondered about that, since old code usually isn't written to be executed on 15,000 "stream processors" - especially since copying data to the GPU and back has a large synchronization overhead that quickly limits the number of cores you can actually use.

        But the blog post mentions the MI300A, which is an APU (24 Zen 4 cores, 228 CUs with ~15k stream processors, and 128GB of unified HBM3). So the base code can run on the CPU cores while the OpenMP threads get executed on the GPU part with zero-copy and little overhead; see the sketch below. That architecture sounds far better suited for legacy code (and new code as well). With some tweaks this might get old code up to speed on these monster chips.
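
        A minimal sketch of what that looks like with OpenMP target offload in Fortran (my own illustrative example, not code from the blog post):

          program offload_demo
            implicit none
            integer, parameter :: n = 10000000
            real(8), allocatable :: a(:), b(:)
            integer :: i

            allocate(a(n), b(n))
            a = 1.0d0

            ! On a discrete GPU the map() clauses imply transfers over the bus;
            ! on a unified-memory APU like the MI300A they can resolve to
            ! zero-copy access to the same HBM.
            !$omp target teams distribute parallel do map(to: a) map(from: b)
            do i = 1, n
               b(i) = 2.0d0 * a(i) + 1.0d0
            end do
            !$omp end target teams distribute parallel do

            print *, 'b(n) =', b(n)
          end program offload_demo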

        Comment


        • #5
          Originally posted by Old Grouch View Post
          There's a huge pile of legacy programs used in scientific and technical computing that are written in Fortran. The language isn't going away any time soon.

          Being able to run Fortran on 'the desktop' on a GPU is useful. E.g.
          But everything that has a beginning also has an end. Isn't Fortran slowly fading away? Are these corporations and scientists planning to run old Fortran code for the next 150 years?

          Comment


          • #6
            Originally posted by cl333r View Post

            But everything that has a beginning also has an end. Isn't Fortran slowly fading away? Are these corporations and scientists planning to run old Fortran code for the next 150 years?
            To answer your first question: almost certainly, yes. Python has largely taken over general-purpose scientific/technical computing. COBOL is also slowly fading away - I don't think anyone would specify COBOL for a new application these days. Fortran is still a valid choice for big numerical simulations - read a data file, do some number crunching, write a results file. The standardised, validated, battle-tested library routines are what have value, and something better will come along eventually.
            Will Fortran last for 150 years? No idea, but it would not surprise me if it were still in use. I'd be less sure about the staying power of Java, for example.

            Comment


            • #7
              Originally posted by Old Grouch View Post

              To answer your first question: almost certainly, yes. Python has largely taken over general-purpose scientific/technical computing. COBOL is also slowly fading away - I don't think anyone would specify COBOL for a new application these days. Fortran is still a valid choice for big numerical simulations - read a data file, do some number crunching, write a results file. The standardised, validated, battle-tested library routines are what have value, and something better will come along eventually.
              Will Fortran last for 150 years? No idea, but it would not surprise me if it were still in use. I'd be less sure about the staying power of Java, for example.
              Java is a bloated mess that failed to lose weight; it will die sooner because it failed to evolve. It has already died on the desktop - some apps like Eclipse still exist, but otherwise it's only present on corporate desktops where a lot of "gray boxes inside gray boxes"-style apps were written back in the day. I'm toying with Zig, which seems very promising.

              Comment


              • #8
                This is a very savvy move. OpenCL doesn't seem to be getting much traction, at least compared to CUDA. There are a lot of programs written in FORTRAN that could potentially be ported to this new compiler. As for new users, FORTRAN is probably easier to learn than OpenCL. Let's hope AMD doesn't screw the pooch here and only offer the compiler on a very limited number of GPUs.

                Comment


                • #9
                  Originally posted by DarkCloud View Post
                  Let's hope AMD doesn't screw the pooch here and only offer the compiler on a very limited number of GPUs
                  If it requires ROCm, then it's already screwed in that regard.

                  Comment


                  • #10
                    Originally posted by Old Grouch View Post
                    There's a huge pile of legacy programs used in scientific and technical computing that are written in Fortran. The language isn't going away any time soon.

                    Being able to run Fortran on 'the desktop' on a GPU is useful. E.g.

                    This post is the first in a series on CUDA Fortran, which is the Fortran interface to the CUDA parallel computing platform. If you are familiar with CUDA C, then you are already well on your way to…


                    But compiler validation is important. I wasted half a year trying to get the same source code to give the same results when compiled by two different compilers on different machines/CPU types. It was a standard source distributed to dozens, if not hundreds, of institutions, and the compiler on the new machine had apparently passed all the (then-current) validation tests. It still gave completely kooky results, and I ended up having to borrow time on somebody else's computer (a large petrochemical/pharmaceutical company helped me out), where the test runs just worked, and I got usable results in about a month.
                    FORTRAN also runs calculations faster than just about every other language.

                    As a hobby I write code in various languages to calculate various things, for instance computing Pi using Leibniz's method (sketched below), verifying Fermat's Last Theorem, and most recently modeling the orbits of the planets to see how often all the planets align.
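
                    For reference, the Leibniz series is pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..., and a minimal Fortran version of that kind of toy benchmark looks roughly like this (an illustrative sketch, not the exact code I timed):

                      program leibniz_pi
                        implicit none
                        integer(8) :: k
                        integer(8), parameter :: terms = 100000000_8
                        real(8) :: s, sgn   ! 'sgn' avoids shadowing the sign() intrinsic

                        ! pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... (converges very slowly)
                        s = 0.0d0
                        sgn = 1.0d0
                        do k = 0, terms - 1
                           s = s + sgn / real(2 * k + 1, 8)
                           sgn = -sgn
                        end do

                        print *, 'estimate:', 4.0d0 * s
                        print *, 'pi      :', 4.0d0 * atan(1.0d0)
                      end program leibniz_pi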

                    The fastest language I tested was Go, but the output was unusable: the results were wrong, and I could not find anything in the docs to explain why.

                    C# was the slowest of the compiled languages, which is understandable considering it relies on .NET. C was very fast, as was Pascal, but FORTRAN was by far the fastest of those that gave accurate results.

                    Python with NumPy approaches C speeds, and if the task can benefit from a JIT decorator it can almost tie C, but nothing beats FORTRAN when extreme accuracy is needed.

                    Comment
