Work Continues On WebAssembly For Low-Level, In-Browser Computing

  • #21
    Originally posted by mdias
    There's a reason you can't compile C-like languages into Java bytecode. Lacking the concept of pointers and unsigned data types might just be a hint that Java bytecode is not a good alternative to WebAssembly.
    The unsigned types are different only at a very high level, before compilation. The JVM supports unsigned types just fine with wrappers. You're assuming that raw pointers and C-like languages are a goal we should aim at. In reality, they're not.
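    To make the wrapper point concrete, here's a minimal sketch of my own (not from the post): since Java 8, the wrapper classes provide static helpers that reinterpret a plain int's bits as unsigned:

    ```java
    public class UnsignedDemo {
        public static void main(String[] args) {
            int x = 0xFFFFFFFE; // -2 as signed, 4294967294 as unsigned

            // Reinterpret the same 32 bits as an unsigned value.
            System.out.println(Integer.toUnsignedString(x)); // 4294967294
            System.out.println(Integer.toUnsignedLong(x));   // 4294967294

            // Unsigned division and comparison on plain ints.
            System.out.println(Integer.divideUnsigned(x, 3));      // 1431655764
            System.out.println(Integer.compareUnsigned(x, 1) > 0); // true
        }
    }
    ```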



    • #22
      Originally posted by Daktyl198
      Except that:
      1. this will most likely be faster than Java was at the time
      What do you mean by this? You're saying that in 2016, a new bytecode format on an 8-core 4 GHz machine with 64 GB of RAM performs faster than the old JVM bytecode format did on a Pentium 2 MMX with 64 MB of RAM? No shit. If you look at the JVM now, it performs really nicely despite a horribly slow startup time (which is less than half a second on modern computers). Hardly any desktop computing task is so CPU-bound that you'd need more speed, and by that I mean CPU speed, not GPU acceleration. Most of the time, if Java applications are slow, it's not due to the bytecode format; there are other reasons, like tons of garbage being collected (thank you, object-oriented paradigm and mutable data).
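      To make the garbage point concrete, a small sketch of my own (not from the post): boxing primitives into objects creates a fresh heap allocation per iteration, while the primitive version allocates nothing:

      ```java
      public class GcChurn {
          public static void main(String[] args) {
              // Boxed: outside the small Integer cache (-128..127), every
              // iteration unboxes, adds, and re-boxes into a new Integer
              // object, handing the GC constant work.
              Integer boxedSum = 0;
              for (int i = 0; i < 50_000; i++) {
                  boxedSum += i;
              }

              // Primitive: same arithmetic, zero heap allocations.
              int primitiveSum = 0;
              for (int i = 0; i < 50_000; i++) {
                  primitiveSum += i;
              }

              System.out.println(boxedSum + " " + primitiveSum);
          }
      }
      ```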

      Originally posted by Daktyl198
      3. You can write your program in multiple languages and then compile to WebAssembly as a target. You might be able to do this with Java bytecode, but I don't think so yet.
      You haven't really used the JVM, I'd assume. It supports tons of languages just fine. On top of the same bytecode, yes.



      • #23
        Originally posted by Luke_Wolf

        There's no reason that WebAssembly can't implement accessibility features, and given that Google is the primary developer of the standard, either the SEO side of things is already solved or it will be.
        Actually, Google isn't the primary developer of the standard. Like asm.js, it was proposed by Mozilla.



        • #24
          Originally posted by Luke_Wolf

          which as a result means that instead of being able to implement a strict engine, most browser engines actually implement 3 different engines
          There is a lot of legacy stuff out there, but HTML5 actually does fix this, because the handling of any problems in parsing the HTML is fully defined.

          If you've noticed, modern content looks and works the same way across different browsers.



          • #25
            Originally posted by caligula

            What do you mean by this? You're saying that in 2016, a new bytecode format on an 8-core 4 GHz machine with 64 GB of RAM performs faster than the old JVM bytecode format did on a Pentium 2 MMX with 64 MB of RAM? No shit. If you look at the JVM now, it performs really nicely despite a horribly slow startup time (which is less than half a second on modern computers). Hardly any desktop computing task is so CPU-bound that you'd need more speed, and by that I mean CPU speed, not GPU acceleration. Most of the time, if Java applications are slow, it's not due to the bytecode format; there are other reasons, like tons of garbage being collected (thank you, object-oriented paradigm and mutable data).

            You haven't really used the JVM, I'd assume. It supports tons of languages just fine. On top of the same bytecode, yes.
            At work I regularly run some of those Java applets; they're total crap. And not on crappy hardware, either.

            Much, much worse than anything written in HTML/JS/CSS in the last 10 years.

            The reason for this is that you're loading a large, completely different runtime while you already have an existing runtime running; in the case of HTML/JS/CSS, that existing runtime is used directly, with nothing else to load. I would almost go so far as to say it's similar to starting a new virtual machine on your desktop/laptop when you want to run some special application you can't run on your host operating system.

            That is why it takes so much longer to load.

            This has always been the problem with Java applets, and people stopped building them. This isn't politics; it's pragmatism.

            Especially because browser engines kept adding more and more APIs.

            Not to mention the problem of security updates for plugins. Browsers these days get regular updates, and that usually works well.
            Last edited by Lennie; 18 December 2015, 03:30 PM.



            • #26
              Originally posted by carewolf

              So are assembler, C++, and most other languages, until you use a thread API. The thread API for JavaScript is called workers.
              Web Workers existed before asm.js and WebAssembly.

              When WebAssembly was introduced, they also finally decided how to share data between Web Workers, so you can port existing multithreaded applications that share data between threads. Basically, you send a 'message' containing a SharedArrayBuffer (in effect, passing a pointer to some shared memory).

              The other thing they are going to add is support for SIMD.

              And you have to remember WebAssembly isn't as slow as normal hand-written JavaScript. In certain dynamic-language benchmarks, asm.js is even faster than Java or C/C++.

              But in much more realistic and somewhat recent benchmarks, asm.js takes about 1.5x as long as (and is thus slower than) Java or C/C++.

              They say a large cause of that is the missing support for SIMD, which is exactly what they are going to add in the near future (if they haven't already).

              Normally, JavaScript is garbage collected, and there's no direct way to control when the garbage collection happens in browsers (JavaScript engines).

              But asm.js/WebAssembly isn't like that: ahead-of-time compilation is used, and (at least in Firefox) the result of the compilation is cached, so loading a second time is much faster.

              So I do think WebAssembly improves things.



              • #27
                Originally posted by abral

                Actually, Google isn't the primary developer of the standard. Like asm.js, it was proposed by Mozilla.
                Nope, WebAssembly is the successor to PNaCl as a technology, whether Mozilla proposed it or not. Google just finally got Mozilla to agree to break the JavaScript monopoly on the web.

                Originally posted by Lennie
                There is a lot of legacy stuff out there, but HTML5 actually does fix this, because the handling of any problems in parsing the HTML is fully defined.

                If you've noticed, modern content looks and works the same way across different browsers.
                Actually, no, it doesn't. HTML5 video has been a very troublesome beast, to pick an easy example, and what is far more noticeable is that less and less of the web has worked on alternative browsers since the rise of HTML5, while sites have remained much the same or gotten even worse. The reality is that further developing XHTML was the right way forward, instead of sticking to the stance that HTML should be forgiving for non-programmers.



                • #28
                  Originally posted by caligula
                  What do you mean by this? You're saying that in 2016, a new bytecode format on an 8-core 4 GHz machine with 64 GB of RAM performs faster than the old JVM bytecode format did on a Pentium 2 MMX with 64 MB of RAM? No shit. If you look at the JVM now, it performs really nicely despite a horribly slow startup time (which is less than half a second on modern computers). Hardly any desktop computing task is so CPU-bound that you'd need more speed, and by that I mean CPU speed, not GPU acceleration. Most of the time, if Java applications are slow, it's not due to the bytecode format; there are other reasons, like tons of garbage being collected (thank you, object-oriented paradigm and mutable data).
                  Don't blame OOP and mutable data for your bytecode being bad; .NET manages to stay within a similar memory-usage magnitude to C++ in comparable real-world applications.



                  • #29
                    Originally posted by Luke_Wolf

                    Don't blame OOP and mutable data for your bytecode being bad; .NET manages to stay within a similar memory-usage magnitude to C++ in comparable real-world applications.
                    .NET allows the use of stack variables and structs, unlike Java. Java promotes using objects for everything, yet makes it as slow as possible to use things like a Point3D object. Java code that actually wants to be FAST has to resort to ridiculous tricks, like using a float array of coordinates divided into groups of three instead of an array of Point3D objects. The Java GC has to check the world for pointers to every single Point3D object, while .NET can say, "That thing in this class is a struct and must be copied, so no references can exist."
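                    A rough sketch of that trick (my own illustration; the names are hypothetical): an array of Point3D objects gives the GC a million separate objects to trace, while a flat float[] keeps the same coordinates in one contiguous allocation:

                    ```java
                    public class PointLayout {
                        static final int N = 1_000_000;

                        // Object-per-point: N small heap objects,
                        // each traced individually by the GC.
                        static final class Point3D {
                            final float x, y, z;
                            Point3D(float x, float y, float z) {
                                this.x = x; this.y = y; this.z = z;
                            }
                        }

                        public static void main(String[] args) {
                            Point3D[] objects = new Point3D[N];
                            for (int i = 0; i < N; i++) {
                                objects[i] = new Point3D(i, i, i);
                            }

                            // Flat layout: one allocation, coordinates
                            // stored in groups of three.
                            float[] flat = new float[3 * N];
                            for (int i = 0; i < N; i++) {
                                flat[3 * i]     = i; // x
                                flat[3 * i + 1] = i; // y
                                flat[3 * i + 2] = i; // z
                            }

                            // Reading point i becomes index arithmetic
                            // instead of a pointer chase.
                            System.out.println(objects[42].x == flat[3 * 42]); // true
                        }
                    }
                    ```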

                    Also, .NET applications seem to start much faster than Java ones.

                    Yes, .NET is just better.



                    • #30
                      Originally posted by caligula
                      Hardly any desktop computing task is so CPU-bound that you'd need more speed, and by that I mean CPU speed, not GPU acceleration.
                      These days, software either has more CPU than it can possibly use, or it can use as much CPU as you can provide and will always need more.

                      As we see with DirectX 12, the only use games had for a fast CPU was to make up for a single-threaded GPU interface.

                      But applications like video processing, machine learning, voice recognition and natural language processing, gas simulations, etc., can use as much CPU as you can give them. They will use up a dual-socket 10-core Xeon (40 threads!) or a Xeon Phi (240 threads, per card) and beg for more.

                      Several of those ARE desktop computing. Who doesn't like to make home movies? Or have their email client scan for spam? (Although that is almost all done by the email provider's servers these days.) Or have their computer recognize their voice, all Star Trek style? Google Now and the Amazon Echo cheat by sending the audio to the cloud to avoid needing a high-powered local CPU. It would be a lot better for privacy if you didn't need to hand your voice audio to someone else's computers just to play music from your local collection.

