AMD Developing Next-Gen Fortran Compiler Based On Flang, Optimized For AMD GPUs


  • Setif
    replied
    Originally posted by Mathias View Post
    I wondered about that, since old code isn't usually written to be executed on 15,000 "stream processors". Especially since copying data to the GPU and back has a huge synchronization overhead that quickly limits the number of cores you can use.

    But the blog post mentions the MI300A, which is an APU (24 Zen 4 cores, 228 CUs with ~15k stream processors, 128 GB of unified HBM3 RAM). So the base code can run on the CPU cores and the OpenMP threads get executed on the GPU part with zero-copy and little overhead. That architecture sounds a lot better suited for legacy code (and new code as well). With some tweaks this might get old code up to speed on these monster chips.
    Many Fortran codes already run on clusters with thousands of CPUs; go learn about MPI (it has been in use since the nineties). Fortran is also one of the few languages with native support for vectorization.
    OpenMP has supported accelerators (GPUs) since version 4.5, including directives to copy data to and from the GPU.
    There are also OpenACC (similar to OpenMP) and CUDA Fortran (an NVIDIA extension for running CUDA code).
    Of course there are guides on how to make code run effectively on GPUs, but those apply to C/C++ as well.
    Fortran doesn't mean "old code": the language keeps getting new standards, the latest being Fortran 2023, and people still write their code in it.
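    To give an idea of what that looks like, here is a minimal sketch of an OpenMP offload loop in Fortran. It is illustrative only: it assumes a compiler built with OpenMP target offload enabled for your GPU, and the program and variable names are made up.

    ! Minimal sketch: SAXPY offloaded to a GPU via OpenMP target directives.
    ! Assumes an OpenMP 4.5+ compiler with offload support for the target GPU.
    program saxpy_offload
      implicit none
      integer, parameter :: n = 1000000
      real, allocatable :: x(:), y(:)
      real :: a
      integer :: i

      allocate(x(n), y(n))
      a = 2.0
      x = 1.0
      y = 3.0

      ! map clauses copy x to the device and bring y back when the region ends
      !$omp target teams distribute parallel do map(to: x) map(tofrom: y)
      do i = 1, n
         y(i) = a * x(i) + y(i)
      end do
      !$omp end target teams distribute parallel do

      print *, 'y(1) =', y(1)   ! expect 5.0
      deallocate(x, y)
    end program saxpy_offload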



  • L_A_G
    replied
    I kind of had to do a double take when I read the title, but it does make sense. University mathematics and engineering departments, especially control engineering, still use and actively maintain a lot of legacy Fortran code, on account of a lot of very old professors, said legacy code, and the language having been written first and foremost for them. However, I've never seen it actively used outside of legacy applications since I graduated. Universities are clearly to legacy languages what petting zoos are to retired race horses (who'd otherwise be turned into glue and/or hotdog meat).

    I've never had to write any myself, but I've worked with Matlab/Octave, which are largely written in it and have many of the same... "characteristics". Differences like row- vs column-major order, against every single other programming and scripting language I've ever used, genuinely made me feel like throwing a CRT monitor through a window.
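    For anyone who hasn't hit it: Fortran (and therefore Matlab) stores arrays column-major, so the leftmost index should vary fastest for contiguous memory access. A tiny illustrative sketch, with made-up names:

    ! Illustrative sketch: Fortran is column-major, so a(1,1), a(2,1), a(3,1), ...
    ! are adjacent in memory, the opposite of C's row-major layout.
    program column_major_demo
      implicit none
      integer, parameter :: n = 4
      real :: a(n, n)
      integer :: i, j

      ! Cache-friendly traversal: column index j outer, row index i inner
      do j = 1, n
         do i = 1, n
            a(i, j) = real(i + n * (j - 1))   ! fills consecutive memory locations
         end do
      end do

      print *, a(:, 1)   ! the first column is contiguous
    end program column_major_demo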

    Originally posted by pokeballs View Post
    But why would I ever want to port my code to this joke?

    You do realize this is part of ROCm and is there to help people easily port their Fortran code to it?



  • pokeballs
    replied
    They should stop wasting time and focus on making ROCm actually enticing.
    CUDA works and is officially supported on every single NVIDIA GPU. It offers lots of built-in goodies to make your life easier.
    Until AMD realizes ROCm is inferior to every other solution, they'll continue with their HIP to "help you port your code"...
    But why would I ever want to port my code to this joke?



  • sophisticles
    replied
    Originally posted by Mathias View Post
    I wondered about that, since old code isn't usually written to be executed on 15,000 "stream processors". Especially since copying data to the GPU and back has a huge synchronization overhead that quickly limits the number of cores you can use.

    But the blog post mentions the MI300A, which is an APU (24 Zen 4 cores, 228 CUs with ~15k stream processors, 128 GB of unified HBM3 RAM). So the base code can run on the CPU cores and the OpenMP threads get executed on the GPU part with zero-copy and little overhead. That architecture sounds a lot better suited for legacy code (and new code as well). With some tweaks this might get old code up to speed on these monster chips.
    NVIDIA's cards have been able to directly access data on disk for a while now:

    As AI and HPC datasets continue to increase in size, the time spent loading data for a given application begins to place a strain on the total application’s performance. When considering end-to-end…



  • sophisticles
    replied
    Originally posted by Old Grouch View Post
    There's a huge pile of legacy programs used in scientific and technical computing that are written in Fortran. The language isn't going away any time soon.

    Being able to run Fortran on 'the desktop' on a GPU is useful. E.g.

    This post is the first in a series on CUDA Fortran, which is the Fortran interface to the CUDA parallel computing platform. If you are familiar with CUDA C, then you are already well on your way to…


    But compiler validation is important. I wasted half a year trying to get the same source code to give the same results when compiled by two different compilers on different machines/CPU types. It was a standard source distributed to dozens, if not hundreds, of institutions; and the compiler on the new machine had, apparently, passed all the (then) validation tests. It still gave completely kooky results, and I ended up having to borrow time on somebody else's computer (a large petrochemical/pharmaceutical company helped me out), where the test runs just worked, and I got usable results in about a month.
    FORTRAN also runs calculations faster than just about every other language.

    As a hobby I write code in various languages to calculate various things, for instance Pi using Leibniz's method, numerically checking cases of Fermat's Last Theorem, and most recently modeling the orbits of the planets to see how often they all align.

    The fastest language I tested was Go, but the output was unusable: the results were wrong, and I could not find anything in the docs to explain it.

    C# was the slowest of the compiled languages, which is understandable considering it relies on .NET. C was very fast, as was Pascal, but FORTRAN was by far the fastest of the languages that gave accurate results.

    Python with NumPy approaches C speeds, and if the task can benefit from a JIT decorator it can almost tie C, but nothing beats FORTRAN when extreme accuracy is needed.
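    For reference, the Leibniz series is just pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...; a minimal Fortran sketch of it (illustrative only, the series converges very slowly, and the names are made up):

    ! Minimal sketch of the Leibniz series for Pi.
    ! Purely illustrative, not a serious benchmark.
    program leibniz_pi
      implicit none
      integer, parameter :: dp = selected_real_kind(15)
      integer, parameter :: nterms = 100000000
      integer :: k
      real(dp) :: s, sgn

      s   = 0.0_dp
      sgn = 1.0_dp
      do k = 0, nterms - 1
         s   = s + sgn / real(2*k + 1, dp)   ! 1 - 1/3 + 1/5 - ...
         sgn = -sgn
      end do

      print *, 'pi ~', 4.0_dp * s
    end program leibniz_pi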



  • DanL
    replied
    Originally posted by DarkCloud View Post
    Let's hope AMD doesn't screw the pooch here and only offer the compiler on a very limited number of GPUs
    If it requires ROCm, then it's already screwed in that regard.



  • DarkCloud
    replied
    This is a very savvy move. OpenCL doesn't seem to be getting much traction, at least when compared to CUDA. There are a lot of programs written in FORTRAN that could potentially be ported to this new FORTRAN compiler. As for new users, FORTRAN is probably easier to learn than OpenCL. Let's hope AMD doesn't screw the pooch here and only offer the compiler on a very limited number of GPUs



  • cl333r
    replied
    Originally posted by Old Grouch View Post

    To answer your first question: almost certainly, yes. Python has, for the most part, taken over general-purpose scientific/technical computing. COBOL is also slowly fading away - I don't think anyone would specify COBOL for a new application these days. Fortran is still a valid choice for big numerical simulations - read a data file, do some number crunching, write a results file. The standardised, validated, battle-tested library routines are what have value, and something better will come along eventually.
    Will Fortran last for 150 years? No idea, but it would not surprise me if it were still in use. I'd be less sure about the staying power of Java, for example.
    Java is a bloated mess that failed to lose weight; it will die sooner rather than later because it failed to evolve. It has already died on the desktop: some apps like Eclipse still exist, but otherwise it only survives on corporate desktops where a lot of "gray boxes inside gray boxes"-style apps were written back in the day. I'm toying with Zig, which seems very promising.



  • Old Grouch
    replied
    Originally posted by cl333r View Post

    But everything that has a beginning also has an end. Isn't Fortran slowly fading away? Are these corporations and scientists planning to run old Fortran code for the next 150 years?
    To answer your first question: almost certainly, yes. Python has, for the most part, taken over general-purpose scientific/technical computing. COBOL is also slowly fading away - I don't think anyone would specify COBOL for a new application these days. Fortran is still a valid choice for big numerical simulations - read a data file, do some number crunching, write a results file. The standardised, validated, battle-tested library routines are what have value, and something better will come along eventually.
    Will Fortran last for 150 years? No idea, but it would not surprise me if it were still in use. I'd be less sure about the staying power of Java, for example.



  • cl333r
    replied
    Originally posted by Old Grouch View Post
    There's a huge pile of legacy programs used in scientific and technical computing that are written in Fortran. The language isn't going away any time soon.

    Being able to run Fortran on 'the desktop' on a GPU is useful. E.g.
    But everything that has a beginning also has an end. Isn't Fortran slowly fading away? Are these corporations and scientists planning to run old Fortran code for the next 150 years?

