
TornadoVM 0.15 Released - Now Supports Running Java On Intel Arc Graphics


  • Phoronix: TornadoVM 0.15 Released - Now Supports Running Java On Intel Arc Graphics

    TornadoVM is an open-source plugin for OpenJDK and GraalVM that allows running Java programs on heterogeneous hardware like GPUs and FPGAs. TornadoVM 0.15, released today, is the first version to support discrete Intel Arc Graphics hardware...


  • #2
    Reminds me of https://github.com/bsletten/rootbeer1.

    Yes yes yes, if you want serious metal performance, you will not use Java. But the ability to seamlessly integrate code that runs on the GPU is seriously cool!

    Please note: there are quite a few limitations on the code that can run on the GPU; you cannot throw just *any* old Java application onto it. But it is still fun. Thanks to all the contributors!

    • #3
      Kinda reminds me of: Linux in a Pixel Shader - A RISC-V Emulator for VRChat

      There are things that can be done but not necessarily should be. :P From a technical perspective, it's amazing, of course.

      • #4
        Java everywhere! The dream is still alive.

        • #5
          Originally posted by Draget View Post
          Reminds me of https://github.com/bsletten/rootbeer1.

          Yes yes yes, if you want serious metal performance, you will not use Java. But the ability to seamlessly integrate code that runs on the GPU is seriously cool!

          Please note: there are quite a few limitations on the code that can run on the GPU; you cannot throw just *any* old Java application onto it. But it is still fun. Thanks to all the contributors!
          This is actually pretty exciting, even for the "serious metal performance" question.

          It's one thing to get a block of code running really well on alternative hardware, but such endeavours are rarely worth the development overhead (e.g. outside very niche compute like crypto) versus the risk of not really seeing much gain (e.g. when it turns out the bottlenecks were in networking or synchronisation).

          Being able to take existing classes that are "possibly" suited to GPU compute, get a quick answer as to whether they actually are, and balance at run time where the code runs will make a huge difference to using all the available hardware efficiently.

          I need to get round to taking a few classes for a spin on this RTX 3070.

          • #6
            Finally, massively parallel remote code execution. I'm gonna rev up my bitcoin miners.

            • #7
              Interesting. Ten years ago, with the advent of AMD's Kaveri APU and later Carrizo and Bristol Ridge (I still have two Bristol Ridge-based systems), AMD and Oracle were talking up using AMD's HSA hUMA architecture to run Java on GPUs, DSPs and FPGAs: first with Project Aparapi (generating OpenCL code to run on GPUs) and later Project Sumatra (generating HSAIL code to run on GPUs, DSPs, etc.). All of it died when Lisa Su came in, dismantled the Bulldozer line of APUs along with HSA and HSAIL, and rebuilt AMD from the ground up on the Zen architecture.

              Not saying that has been a bad thing. AMD is destroying Intel right now, as evidenced by Intel's latest catastrophic earnings report. But we did lose true unified memory addressing between CPU and GPU (I believe Carrizo and Bristol Ridge had 48-bit unified memory addressing shared between CPU and GPU), and with it the dream of a true heterogeneous environment for things such as Java.

              So is TornadoVM another attempt at what Project Aparapi and Project Sumatra tried ten years ago? And was anyone on TornadoVM involved in either Aparapi or Sumatra?

              • #8
                TornadoVM fills the gap between heterogeneous hardware and high-level programming languages like Java. It is a parallel programming framework for Java Virtual Machine (JVM) languages that can transparently and dynamically compile Java bytecode to OpenCL and execute it on heterogeneous hardware. TornadoVM also integrates an optimizing runtime that can reuse device buffers, saving data transfers across devices, and it can perform live task migration between computing devices.

                TornadoVM Code In Action


                If you want to start working with TornadoVM, matrix multiplication is a good place to start. The code below is a simple matrix multiplication program in plain Java:

                class Calculate
                {
                    // Plain sequential Java version; row-major, size x size matrices.
                    public static void findMatrixMultiplication(final int[] arrA, final int[] arrB, final int[] arrC, final int size)
                    {
                        for (int i = 0; i < size; i++)
                        {
                            for (int j = 0; j < size; j++)
                            {
                                int sum = 0;
                                for (int k = 0; k < size; k++)
                                    sum += arrA[(i * size) + k] * arrB[(k * size) + j];
                                arrC[(i * size) + j] = sum;
                            }
                        }
                    }
                }

                To accelerate this code with TornadoVM, we first have to annotate the loops that can be parallelized. Here we can fully parallelize the two outer loops, as there are no dependencies between their iterations.

                For this, TornadoVM provides the @Parallel annotation.

                The following listing shows the code after applying the annotations:

                import uk.ac.manchester.tornado.api.annotations.Parallel;

                class Calculate
                {
                    public static void findMatrixMultiplication(final int[] arrA, final int[] arrB, final int[] arrC, final int size)
                    {
                        // The two outer loops are annotated; TornadoVM can map them
                        // to a 2D parallel index space in OpenCL.
                        for (@Parallel int i = 0; i < size; i++)
                        {
                            for (@Parallel int j = 0; j < size; j++)
                            {
                                int sum = 0;
                                for (int k = 0; k < size; k++)
                                    sum += arrA[(i * size) + k] * arrB[(k * size) + j];
                                arrC[(i * size) + j] = sum;
                            }
                        }
                    }
                }

                The @Parallel annotation acts as a hint to the TornadoVM JIT compiler.

                The compiler does not force parallelization. Instead, it first checks whether the annotated loops can actually be parallelized; if so, it replaces the for-loops with the equivalent parallel indexing in OpenCL. If the loop iterations are not independent and cannot be parallelized, TornadoVM bails out and the code runs sequentially.
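
                For completeness, here is roughly how the annotated kernel gets handed to the runtime. This is a sketch based on the TaskGraph/TornadoExecutionPlan API introduced in 0.15; the exact package paths and method signatures are from memory, so check the TornadoVM documentation before copying it:

                import uk.ac.manchester.tornado.api.TaskGraph;
                import uk.ac.manchester.tornado.api.TornadoExecutionPlan;
                import uk.ac.manchester.tornado.api.enums.DataTransferMode;

                public class Runner
                {
                    public static void main(String[] args)
                    {
                        final int size = 512;
                        final int[] arrA = new int[size * size];
                        final int[] arrB = new int[size * size];
                        final int[] arrC = new int[size * size];
                        java.util.Arrays.fill(arrA, 2);
                        java.util.Arrays.fill(arrB, 3);

                        // Declare what to copy to the device, which task to run,
                        // and what to copy back to the host.
                        TaskGraph graph = new TaskGraph("s0")
                            .transferToDevice(DataTransferMode.FIRST_EXECUTION, arrA, arrB)
                            .task("t0", Calculate::findMatrixMultiplication, arrA, arrB, arrC, size)
                            .transferToHost(arrC);

                        // Snapshot and execute: TornadoVM JIT-compiles the task to
                        // OpenCL for the selected device, or falls back to running
                        // the plain Java version sequentially.
                        TornadoExecutionPlan plan = new TornadoExecutionPlan(graph.snapshot());
                        plan.execute();
                    }
                }

                The program is then launched with the tornado command instead of plain java, which wires in the TornadoVM runtime and lets it pick a device (iGPU, discrete GPU, or CPU fallback).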

                • #9
                  Originally posted by Jumbotron View Post
                  ...matrix multiplication is a good place to start.
                  Great, thank you. Markov matrices are exactly the kind of thing I was thinking this could be useful for.

                  A couple of questions: what's it like for debugging, and is it Eclipse-friendly?

                  Also, rather than @Parallel annotations, is there something akin to the mclapply function in R, where you can give it a list of objects and a function to run in parallel over them? And if not directly, how would you write such a function using the annotation? (Roughly what I have in mind is sketched below.)

                  Is there some list of Java functions/objects that are and are not available inside the GPU (Math.random() springs to mind)?
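
                  Assuming the per-element work has to be inlined into a @Parallel kernel over a flat primitive array, something like the following is what I mean. The class and method names are invented for illustration, and whether TornadoVM can compile a call through a functional interface inside a kernel is exactly the open question:

                  import uk.ac.manchester.tornado.api.annotations.Parallel;

                  class ParallelMap
                  {
                      // GPU-friendly "map": apply the same operation to every element.
                      // The element-wise work (here, squaring) is inlined into the loop
                      // body, matching the @Parallel kernel pattern from post #8.
                      static void squareAll(final int[] input, final int[] output, final int size)
                      {
                          for (@Parallel int i = 0; i < size; i++)
                          {
                              output[i] = input[i] * input[i];
                          }
                      }
                  }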
                  Last edited by mSparks; 29 January 2023, 01:24 AM.
