Work Continues On WebAssembly For Low-Level, In-Browser Computing
Originally posted by Daktyl198
Except that:
1. this will most likely be faster than Java was at the time
Originally posted by Daktyl198
3. you can write your program in multiple languages and compile to WebAssembly as a target. You might be able to do this with Java bytecode, but I don't think so yet.
Originally posted by Luke_Wolf
There's no reason WebAssembly can't implement accessibility features, and given that Google is the primary developer of the standard, the SEO side of things is either already solved or will be.
Originally posted by Luke_Wolf
which as a result means that instead of being able to implement a strict engine, most browser engines actually implement 3 different engines

If you've noticed, modern content looks and works the same way across different browsers.
Originally posted by caligula
What do you mean by this? You're saying that in 2016, a new bytecode format on an 8-core 4 GHz machine with 64 GB of RAM performs faster than the old JVM bytecode format on a Pentium 2 MMX with 64 MB of RAM? No shit. If you look at the JVM now, it performs really nicely despite a horribly slow startup time (which is less than half a second on modern computers). Hardly any desktop computing task is so CPU bound that you'd need more speed, and by that I mean CPU speed, not GPU acceleration. Most of the time, if Java applications are slow, it's not due to the bytecode format; there are other reasons, like tons of garbage being collected (thank you, object-oriented paradigm and mutable data).
You haven't really used the JVM, I'd assume. It supports tons of languages just fine, on top of the same bytecode, yes.
Much, much worse than anything written in HTML/JS/CSS in the last 10 years.
The reason for this is that you are basically loading a large, completely different runtime while you already have an existing runtime running. In the case of HTML/JS/CSS, that existing runtime is used directly, with nothing else to load. I would almost go so far as to say it's similar to starting a new virtual machine on your desktop/laptop when you want to run some special application you can't run on your host operating system.
That is why it takes so much longer to load.
This has always been the problem with Java applets, and people stopped building them. This isn't politics, it's pragmatism.
Especially because the browser engines added more and more APIs.
Not to mention the problem of security updates for plugins. Browsers these days get regular updates, and this usually works well.
Last edited by Lennie; 18 December 2015, 03:30 PM.
Originally posted by carewolf
So is assembler, C++ and most other languages until you use a thread API. The thread API for JavaScript is called workers.
When WebAssembly was introduced, they also finally decided how to share data between Web Workers, so you can port existing multithreaded applications which shared data between threads. Basically you send a 'message' containing a SharedArrayBuffer (essentially passing a pointer to some shared memory).
The other thing they are going to add is support for SIMD.
And you have to remember that WebAssembly isn't as slow as normal hand-written JavaScript. In certain dynamic-language benchmarks, asm.js is even faster than Java or C/C++.
But in much more realistic and somewhat recent benchmarks, asm.js takes about 1.5× as long as (and is thus slower than) Java or C/C++.
They say a large cause of that is the missing support for SIMD, which is exactly what they are going to add in the near future (if they haven't already).
Normally JavaScript is garbage collected, with no direct way to control when garbage collection happens in browsers (JavaScript engines).
But asm.js/WebAssembly isn't like that: ahead-of-time compilation is used, and (at least in Firefox) the result of the compilation is cached, so loading a second time is much faster.
So I do think WebAssembly does improve things.
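For a concrete picture of what "compile to WebAssembly as a target" ends up looking like on the consuming side, here is a sketch of the JavaScript API for loading a module. The bytes below are a hand-assembled minimal module exporting a single `add` function; in real use you would compile C/C++ with a toolchain like Emscripten and fetch a `.wasm` file rather than inlining bytes.

```javascript
// Bytes for the module:
//   (func (export "add") (param i32 i32) (result i32)
//     local.get 0  local.get 1  i32.add)
// hand-encoded per the WebAssembly binary format.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic, version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, 1 body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// Compile and instantiate; both Node and modern browsers expose this API.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```

Exported functions are called like any JavaScript function, which is what makes WebAssembly a compilation target rather than a plugin: it runs inside the existing JS runtime instead of loading a separate one.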
Originally posted by abral
Actually, Google isn't the primary developer of the standard. Like asm.js, it was proposed by Mozilla.

Originally posted by Lennie
There is a lot of legacy stuff out there, but HTML5 actually does fix this, because the handling of any problems in parsing the HTML is fully defined.

If you've noticed, modern content looks and works the same way across different browsers.
Originally posted by caligula
What do you mean by this? You're saying that in 2016, a new bytecode format on an 8-core 4 GHz machine with 64 GB of RAM performs faster than the old JVM bytecode format on a Pentium 2 MMX with 64 MB of RAM? No shit. If you look at the JVM now, it performs really nicely despite a horribly slow startup time (which is less than half a second on modern computers). Hardly any desktop computing task is so CPU bound that you'd need more speed, and by that I mean CPU speed, not GPU acceleration. Most of the time, if Java applications are slow, it's not due to the bytecode format; there are other reasons, like tons of garbage being collected (thank you, object-oriented paradigm and mutable data).
Originally posted by Luke_Wolf
Don't blame OOP and mutable data for your bytecode being bad; .NET manages to stay within a similar memory-usage magnitude to C++ in comparable real-world applications.

Also, .NET applications seem to start much faster than Java ones.
Yes, .NET is just better.
Originally posted by caligula
Hardly any desktop computing task is so CPU bound that you'd need more speed, and by that I mean CPU speed, not GPU acceleration.
As we see with DirectX 12, the only use games had for a fast CPU was to make up for a single-threaded GPU interface.
But applications like video processing, machine learning, voice recognition and natural language, gas simulations, etc., can use as much CPU as you can give them. They will use up a dual-socket 10-core Xeon (40 threads!) or a Xeon Phi (240 threads. Per card.) and beg for more.
Several of those ARE desktop computing. Who doesn't like to make home movies? Or have their email client scan for spam? (Although that is almost all done by the email provider's servers these days.) Or have their computer recognize their voice, all Star Trek style? Devices like Google Now and Amazon Echo cheat by sending the audio to the cloud to avoid needing a high-powered local CPU. It would be a lot better for privacy if you didn't need to hand out voice audio to someone else's computers just to play music from your local collection.