AMD Developing Next-Gen Fortran Compiler Based On Flang, Optimized For AMD GPUs

  • sophisticles
    Senior Member
    • Dec 2015
    • 2547

    #11
    Originally posted by Mathias View Post
I wondered about old code, since it isn't usually written to be executed on 15,000 "stream processors", especially since copying data to the GPU and back has a huge synchronization overhead that quickly limits the number of cores you can use.

But the blog post mentions the MI300A, which is an APU (24 Zen 4 cores, 228 CUs with ~15k stream processors, and 128 GB of unified HBM3 RAM). So the base code can run on the CPU cores, and the OpenMP threads get executed on the GPU part with zero-copy and little overhead. That architecture sounds a lot better suited for legacy code (and for new code as well). With some tweaks, this might get old code up to speed on these monster chips.
    NVIDIA's cards have been able to directly access data on disk for a while now:

    As AI and HPC datasets continue to increase in size, the time spent loading data for a given application begins to place a strain on the total application’s performance. When considering end-to-end…

    Comment

    • pokeballs
      Junior Member
      • Sep 2024
      • 27

      #12
      They should stop wasting time and focus on making ROCm actually enticing.
      CUDA works and is officially supported on every single NVIDIA GPU. It offers lots of built-in goodies to make your life easier.
Until AMD realizes ROCm is inferior to every other solution, they'll continue with their HIP to "help you port your code"...
      But why would I ever want to port my code to this joke?

      Comment

      • L_A_G
        Senior Member
        • Oct 2015
        • 1609

        #13
I kind of had to do a double take when I read the title, but it does make sense. University mathematics and engineering departments, especially control engineering, still use and actively maintain a lot of legacy Fortran code, on account of a lot of very old professors, said legacy code, and the language having been written first and foremost for them. However, I've never seen it actively used outside of legacy applications since I graduated. Universities are clearly to legacy languages what petting zoos are to retired race horses (which would otherwise be turned into glue and/or hotdog meat).

Never had to write any myself, but I've worked with Matlab/Octave, which are largely written in it and have many of the same... "characteristics"... Differences like row- vs. column-major order, against every single other programming and scripting language I've ever used, genuinely made me feel like throwing a CRT monitor through a window.
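For anyone who hasn't been bitten by this, a minimal sketch of what the convention means in practice (array size and names made up): Fortran is column-major, so the first index has to vary fastest in the inner loop for contiguous memory access, the transpose of the idiomatic C loop nest.

Code:
program column_major_demo
  implicit none
  integer, parameter :: n = 1000
  real :: a(n, n)
  integer :: i, j

  ! Fortran is column-major: the FIRST index varies fastest in memory,
  ! so the inner loop should walk down a column for contiguous access.
  do j = 1, n
    do i = 1, n
      a(i, j) = real(i + j)
    end do
  end do
  print *, a(n, n)
end program column_major_demo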

        Originally posted by pokeballs View Post
        But why would I ever want to port my code to this joke?

You do realize this is part of ROCm and is there to help people easily port their Fortran code to it?
        "Why should I want to make anything up? Life's bad enough as it is without wanting to invent any more of it."

        Comment

        • Setif
          Senior Member
          • Feb 2016
          • 301

          #14
          Originally posted by Mathias View Post
I wondered about old code, since it isn't usually written to be executed on 15,000 "stream processors", especially since copying data to the GPU and back has a huge synchronization overhead that quickly limits the number of cores you can use.

But the blog post mentions the MI300A, which is an APU (24 Zen 4 cores, 228 CUs with ~15k stream processors, and 128 GB of unified HBM3 RAM). So the base code can run on the CPU cores, and the OpenMP threads get executed on the GPU part with zero-copy and little overhead. That architecture sounds a lot better suited for legacy code (and for new code as well). With some tweaks, this might get old code up to speed on these monster chips.
Many Fortran codes already run on clusters with thousands of CPUs; go learn about MPI (it has been in use since the nineties). Fortran is one of the few languages with native support for vectorization.
OpenMP has been adding support for accelerators (GPUs) since 4.5, including directives to copy data to and from the GPU.
There are also OpenACC (similar to OpenMP) and CUDA Fortran (an NVIDIA extension for running CUDA code).
Of course there are guides on how to make code run effectively on GPUs, but those apply to C/C++ as well.
Fortran doesn't mean "old code": there are new standards of the language, the latest being Fortran 2023, and people still write new code in it.
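For instance, a minimal OpenMP offload sketch in Fortran (subroutine and variable names made up; this assumes a compiler with OpenMP target-offload support, such as AMD's Flang-based toolchain):

Code:
! The map clauses copy data to the device and results back to the host.
subroutine saxpy_offload(n, a, x, y)
  implicit none
  integer, intent(in) :: n
  real, intent(in)    :: a, x(n)
  real, intent(inout) :: y(n)
  integer :: i

  !$omp target teams distribute parallel do map(to: x) map(tofrom: y)
  do i = 1, n
    y(i) = a * x(i) + y(i)
  end do
end subroutine saxpy_offload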

          Comment

          • AdrianBc
            Senior Member
            • Nov 2015
            • 292

            #15
            Originally posted by cl333r View Post

But everything that has a beginning also has an end. Isn't Fortran slowly fading away? Are these corporations and scientists planning to run old Fortran code for the next 150 years?


C and all languages derived from it, not only C++ but also those that have evolved further apart, like Java or Rust, have much poorer support for arrays than even the first proposal for Fortran from 1954, i.e. 70 years ago.

            Modern Fortran includes most features that can be found in any other modern programming language, while having much better support for operations that handle arrays.

            In theory, any programming language that has means for defining abstract data types and for overloading operators and functions, for instance C++, could be used to write some libraries that would eventually provide the same convenience for writing programs for scientific/technical computing like Fortran.

            In practice, I am not aware of any open source C++ libraries that would be good enough.

Other language extensions, like Python with NumPy, are much uglier and less convenient than Fortran and than what could be implemented in C++, besides being extremely slow for anything that does not consist mostly of invoking library functions written in languages other than Python.

For writing programs for GPUs, Fortran remains an excellent choice. Used wisely, for programs where most of the work consists of array operations, Fortran allows writing programs that are simpler, smaller and easier to read than in any other modern language.
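As a small illustration of that convenience, a sketch with made-up array names: whole-array expressions, array sections and intrinsic reductions, with no explicit loops or bounds bookkeeping.

Code:
program array_ops
  implicit none
  real :: a(100, 100), b(100, 100), c(100, 100)

  call random_number(a)
  call random_number(b)

  c = a * b + 2.0 * a          ! element-wise over the whole arrays
  c(1:50, :) = 0.0             ! assignment to an array section
  print *, sum(c), maxval(a)   ! intrinsic reductions over arrays
end program array_ops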

So no, Fortran is not at all obsolete, because it does not have any good replacement. Obviously, it is good only for a limited domain of applications, and it would be completely unsuitable for others, like writing a text editor or a Web browser.

            Comment

            • onlyLinuxLuvUBack
              Senior Member
              • May 2019
              • 665

              #16
              Originally posted by pokeballs View Post
              They should stop wasting time and focus on making ROCm actually enticing.
              CUDA works and is officially supported on every single NVIDIA GPU. It offers lots of built-in goodies to make your life easier.
Until AMD realizes ROCm is inferior to every other solution, they'll continue with their HIP to "help you port your code"...
              But why would I ever want to port my code to this joke?
add systemd and you'll be loving it: "systemd_fortran_amd"

              Comment

              • LtdJorge
                Senior Member
                • Sep 2020
                • 187

                #17
                Originally posted by Old Grouch View Post

To answer your first question: almost certainly, yes. Python has, for the most part, taken over general-purpose scientific/technical computing. COBOL is also slowly fading away; I don't think anyone would specify COBOL for a new application these days. Fortran is still a valid choice for big numerical simulations: read a data file, do some number crunching, write a results file. The standardised, validated, battle-tested library routines are what have value, and something better will come along eventually.
                Will Fortran last for 150 years? No idea, but it would not surprise me if it were still in use. I'd be less sure about the staying power of Java, for example.
The Python libraries are written in C, C++, Fortran or, lately, Rust underneath, though.

                Comment

                • Old Grouch
                  Senior Member
                  • Apr 2020
                  • 675

                  #18
                  Originally posted by LtdJorge View Post

The Python libraries are written in C, C++, Fortran or, lately, Rust underneath, though.
                  As I said "The standardised, validated, battle-tested library routines are what have value". Other commentators have pointed out why Fortran can still be a valid choice in certain applications. It's not ideal for everything, but in particular niches, it probably cannot easily be beaten.

                  Comment

                  • Old Grouch
                    Senior Member
                    • Apr 2020
                    • 675

                    #19
                    Originally posted by L_A_G View Post
Never had to write any myself, but I've worked with Matlab/Octave, which are largely written in it and have many of the same... "characteristics"... Differences like row- vs. column-major order, against every single other programming and scripting language I've ever used, genuinely made me feel like throwing a CRT monitor through a window.
                    I have great sympathy for your difficulties, but from the other side of the fence. It is 'just' a convention, but when you are used to it, trying to use it in the opposite sense generates bugs and mental contortions one could do without.

                    Comment

                    • rickst29
                      Junior Member
                      • Feb 2013
                      • 45

                      #20
Python can be pretty fast when the mathematical work is done within well-optimized compiled libraries (such as NumPy).

But Fortran has a few inherent "advantages" in working with large arrays. In applications such as weather forecasting, these are typically multi-dimensional arrays. Fortran, by default, does not alias those arrays, and the syntax supports very complex math involving multiple arrays at once (e.g. "e = a / b + c / d", where ALL of the names refer to N-dimensional arrays). A smart compiler, typically helped by a few "hints" provided by the programmer in specially formatted comments (directives), can utilize all kinds of hardware features for super-fast pipelining of complex math on large arrays.
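That expression, written out as a sketch (dimensions made up; the denominators are shifted away from zero just to keep the example safe):

Code:
program array_expr
  implicit none
  real, dimension(64, 64, 64) :: a, b, c, d, e

  call random_number(a); call random_number(b)
  call random_number(c); call random_number(d)
  b = b + 1.0   ! keep denominators away from zero
  d = d + 1.0

  ! One statement over entire 3-D arrays; with no aliasing to worry
  ! about, the compiler is free to vectorize and pipeline this.
  e = a / b + c / d
  print *, sum(e)
end program array_expr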

In my much younger days (I have been retired for many years) I worked for Cray Research, specifically on testing and verification of their compilers. The first compiler at Cray Research was for Fortran (not C), and it was built to generate object code for the 'vectorized' hardware of their early supercomputers. (It did utilize "hinting comments", in which the programmer could promise that "bounds checking will not be needed here", or that "this loop iterator will never go out of bounds", and so on.)

Python is easier to learn and easier to write for general purposes. Fortran's error handling is awkward, its I/O routines are less friendly, and it has no model for isolating "objects" and managing the memory used for their instances. But complex math on non-aliased arrays is tougher to accomplish in Python, and the Python-based runtime will almost always be much slower than the same "job" optimized by a smart Fortran compiler and its compatible libraries.

I expect Python to dominate most new projects, but Fortran will remain useful when the speed of the math processing is critical. In some cases, the difference in execution speed within the mathematical processing can still reach 10-100x in favor of Fortran-compiled code.

Existing Fortran code can also be further enhanced by people willing to learn the language, and the code base is large: weather forecasting models, simulated "mining" of oil fields and other minerals, the behavior of modeled "objects" (such as aircraft and cars) within a fluid, and the fluid dynamics themselves are all things I worked with in the distant past (even before 1990). Many newer areas of biology and chemistry are also using Fortran to build models and simulations.
                      Last edited by rickst29; 16 November 2024, 03:19 PM.

                      Comment
