Rootbeer: A High-Performance GPU Compiler For Java


  • Rootbeer: A High-Performance GPU Compiler For Java

    Phoronix: Rootbeer: A High-Performance GPU Compiler For Java

    In recent months there has been an initiative underway called Rootbeer, which is a GPU compiler for Java code. Rootbeer claims to be more advanced than CUDA or OpenCL bindings for Java as it does static code analysis of the Java Bytecode and takes it automatically to the GPU...

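    For anyone wondering what "taking Java bytecode to the GPU" looks like from the programmer's side, here is a minimal sketch modeled on the project's published examples. The package name and the run/runAll call are assumptions that have changed between Rootbeer releases, so check the version you actually download.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Runtime API as shown in the project's examples; the package has moved
    // between releases (edu.syr.pcpratts... vs org.trifort...), so adjust.
    import org.trifort.rootbeer.runtime.Kernel;
    import org.trifort.rootbeer.runtime.Rootbeer;

    // One Kernel instance per GPU thread; each one squares a single element.
    public class SquareKernel implements Kernel {
        private final int[] data;
        private final int index;

        public SquareKernel(int[] data, int index) {
            this.data = data;
            this.index = index;
        }

        // Rootbeer statically analyzes the bytecode reachable from gpuMethod()
        // and generates CUDA code for it at build time.
        public void gpuMethod() {
            data[index] = data[index] * data[index];
        }

        public static void main(String[] args) {
            int[] data = new int[4096];
            for (int i = 0; i < data.length; i++) {
                data[i] = i;
            }

            List<Kernel> jobs = new ArrayList<Kernel>();
            for (int i = 0; i < data.length; i++) {
                jobs.add(new SquareKernel(data, i));
            }

            // Ships the kernels' reachable state to the GPU, runs them, and
            // copies the results back. Older releases name this runAll().
            new Rootbeer().run(jobs);
        }
    }
    ```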

  • #2
    I guess the only question I have is: will this make my Minecraft run faster on my laptop, given its AMD 5750? Because Optifine made no difference and, despite my i7 processor, I'm lucky to hit 50fps in a small(ish) window.

    Fingers crossed and all that.



    • #3
      I think Notch would have to compile Minecraft with Rootbeer for that to work.

      If I understand the principles of Rootbeer correctly, then yes, Minecraft could very well be faster (in theory, at least).

      Edit: Now the only question that remains for me is: Minecraft is very GPU intensive (many, many blocks to render), so wouldn't its performance suffer because the GPU would also have to execute what the CPU would normally execute?
      Last edited by bug!; 13 August 2012, 09:58 AM.



      • #4
        Originally posted by bug!
        If I understand the principles of Rootbeer correctly, then yes, Minecraft could very well be faster.
        Not at all. Minecraft would then run on the GPU, but that doesn't make it faster.

        GPUs are not faster than CPUs. They're just optimized for a different kind of workload. The trick is to keep CPU-affine workloads on the CPU, while moving GPU-affine workloads to the GPU.

        Just dumping everything on the GPU is going to end up even worse than using the CPU for rendering.


        But I don't think it would even run on the GPU. Some tasks - most importantly OS calls - need to be executed by the CPU, where the OS resides. Think file system access, audio/video output, mouse and keyboard input, and timing functions like vsync. Those are not available on your GPU.


        So while this project should make it easier to move some suitable sub-routines onto the GPU, just re-compiling everything is neither going to work, nor would it make the game faster.
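
        To make "suitable sub-routines" concrete, here is a hypothetical illustration (the names are made up): the per-element arithmetic loop is the kind of thing a tool like Rootbeer could offload, while anything that goes through the OS has to stay on the CPU.

        ```java
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        public class ChunkUpdater {

            // GPU-affine: the same arithmetic applied independently to every
            // element. This is the shape of loop worth moving to the GPU.
            static void applyGravity(float[] velocityY, float dt) {
                for (int i = 0; i < velocityY.length; i++) {
                    velocityY[i] -= 9.81f * dt;
                }
            }

            // CPU-only: file system access goes through the OS, which the GPU
            // cannot reach, so this stays on the host no matter how it is compiled.
            static byte[] loadChunk(Path file) throws IOException {
                return Files.readAllBytes(file);
            }
        }
        ```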



        • #5
          Firstly, GPUs are faster than CPUs, at least modern ones anyway. Actually utilizing more than two or three of the cores is where the real challenge comes in, but in raw performance, CPUs are no match.

          However, the one real question is: can we see some benchmarks?



          • #6
            Originally posted by scaine
            I guess the only question I have is: will this make my Minecraft run faster on my laptop, given its AMD 5750? Because Optifine made no difference and, despite my i7 processor, I'm lucky to hit 50fps in a small(ish) window.

            Fingers crossed and all that.
            Probably not. Minecraft uses LWJGL, which contains an OpenGL wrapper. OpenGL is OpenGL, no matter whether it's called from C or Java or whatever. Still, maybe Minecraft has some CPU bottlenecks, but executing those on the GPU won't help since it's already doing OpenGL stuff.

            Rootbeer is more like a Java-based alternative to something like OpenCL. I could imagine that Fork/Join stuff would be pretty cool on the GPU (you could also do your rendering that way, but OpenGL is a bit more powerful for that ^^)
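
            To show why Fork/Join-style work maps so well onto a GPU, here is a plain java.util.concurrent sketch (class name and threshold are made up): every leaf chunk is processed independently, which is exactly the property a per-element GPU kernel exploits.

            ```java
            import java.util.Arrays;
            import java.util.concurrent.ForkJoinPool;
            import java.util.concurrent.RecursiveTask;

            // Classic Fork/Join reduction: split the array until the chunks are
            // small, sum the leaves independently, then combine the partial results.
            public class ParallelSum extends RecursiveTask<Long> {
                private static final int THRESHOLD = 10000; // illustrative cutoff

                private final int[] data;
                private final int from;
                private final int to;

                public ParallelSum(int[] data, int from, int to) {
                    this.data = data;
                    this.from = from;
                    this.to = to;
                }

                @Override
                protected Long compute() {
                    if (to - from <= THRESHOLD) {
                        long sum = 0;
                        for (int i = from; i < to; i++) {
                            sum += data[i];
                        }
                        return sum;
                    }
                    int mid = (from + to) / 2;
                    ParallelSum left = new ParallelSum(data, from, mid);
                    ParallelSum right = new ParallelSum(data, mid, to);
                    left.fork();                          // run the left half asynchronously
                    return right.compute() + left.join(); // do the right half, then combine
                }

                public static void main(String[] args) {
                    int[] data = new int[1000000];
                    Arrays.fill(data, 1);
                    long total = new ForkJoinPool().invoke(new ParallelSum(data, 0, data.length));
                    System.out.println(total); // prints 1000000
                }
            }
            ```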



            • #7
              Rootbeer only works with CUDA; there is no OpenCL backend yet.



              • #8
                The upside: I often prototype in Java. It's probably because I am more experienced with Java than with other languages. This allows me to take advantage of the GPU now, which is neat.

                The downside: after delivering a Java proof-of-concept, I am less likely to rewrite it in a GPU-friendly language, because I now have this feature within Java. The world is now burdened with my Java prototypes. MWAHAHAHAHAHA!




                • #9
                  Originally posted by coder543
                  Firstly, GPUs are faster than CPUs, at least modern ones anyway. Actually utilizing more than two or three of the cores is where the real challenge comes in, but in raw performance, CPUs are no match.

                  However, the one real question is: can we see some benchmarks?
                  GPUs aren't faster than CPUs, and CPUs aren't faster than GPUs [if you ever want to work in HPC, you need to understand this].

                  Analogy: GPUs are like warriors, dumb but with lots of brute strength, while CPUs are the geek squad, very smart but lacking brute strength.

                  So, for example, if you take a very parallel algorithm like an MxM float IDCT and run it only once on a single block of data, the CPU is worlds faster, because just passing the data to the GPU takes longer than the entire time the CPU needs to complete the operation. But if you have something like a video with millions of data blocks, the CPU stagnates very fast: for all its per-core speed it doesn't have the brute strength, and this is where the GPU shines [and the cost of loading the data onto the GPU can be neglected]. Even though every shader unit is massively slower [and dumber] than a CPU core, the GPU has so many of them that it can work in parallel [for example, 1500 shader cores can process 1500 data blocks per cycle while the next 1500 are waiting <-- theoretically it is never that good, but that is the general idea].

                  This all means GPUs are good when you need to crunch numbers with basic operations in massive quantities and/or with very long precision types, and that's basically it. If you try anything else on a GPU it will become massively slower, because GPUs are not general-purpose computing devices, so they lack hardware that is present in a CPU [branch prediction, prefetch, pipelining, etc.].

                  So the CPU is as needed as the GPU, and for the correct task both are extremely fast. So don't believe the PR crap that a Tesla can crunch 1 teraflop and a CPU can't: that 1 TFLOP is a best-case scenario, absolutely useless in real-life software, obtained with a very carefully optimized dataset. To get practically close to that speed you need to be extremely creative with your code, so you can always pass optimal data [and fast enough to keep the GPU fed 100% of the time] in the optimal configuration to the specific GPU you are working with.
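
                  A back-of-envelope version of that trade-off, with deliberately invented numbers just to show the shape of the argument: the transfer to the GPU is a fixed tax, so the GPU only wins once there are enough data blocks to amortize it.

                  ```java
                  // Toy cost model with invented numbers; only the shape matters.
                  public class TransferTradeoff {
                      public static void main(String[] args) {
                          double transferMs  = 5.0;    // one-off cost of shipping the data to the GPU
                          double cpuPerBlock = 0.01;   // CPU time per data block (ms)
                          double gpuPerBlock = 0.0005; // GPU time per block once the data is resident (ms)

                          for (long blocks : new long[] {1L, 1000L, 1000000L}) {
                              double cpu = blocks * cpuPerBlock;
                              double gpu = transferMs + blocks * gpuPerBlock;
                              System.out.printf("%8d blocks: CPU %.2f ms, GPU %.2f ms -> %s wins%n",
                                      blocks, cpu, gpu, cpu < gpu ? "CPU" : "GPU");
                          }
                      }
                  }
                  ```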



                  • #10
                    Originally posted by jrch2k8
                    GPUs aren't faster than CPUs, and CPUs aren't faster than GPUs [if you ever want to work in HPC, you need to understand this].
                    Well, actually they are, unless you've got a problem which cannot be parallelized.

                    Originally posted by jrch2k8
                    This all means GPUs are good when you need to crunch numbers with basic operations in massive quantities and/or with very long precision types, and that's basically it. If you try anything else on a GPU it will become massively slower, because GPUs are not general-purpose computing devices, so they lack hardware that is present in a CPU [branch prediction, prefetch, pipelining, etc.].
                    They actually have most of that, e.g. pipelining.

