Miguel de Icaza Talks Up WebAssembly Greatness


  • #21
    Originally posted by Volta View Post
    Another crap like mono from Icaza?
    Yup, definitely in the top 3 worst sell outs in OSS history fo sho.... No doubt...



    • #22
      Originally posted by cjcox View Post
      As a FOSS developer, I guess I'm ok with WebAssembly if and only if the source code is provided so we aren't just randomly downloading and executing binary blobs from the Internet (which we all know is an incredibly safe place of noble do gooders).

      Edit: Maybe what is needed is a way of downloading the blob and executable with source in a way that they are signed and associated so that in a browser debugger you could trace through the program and know what it's doing, etc...

      In short, the obvious goals of WebAssembly seem nefarious. Need a way to fix that.

      Edit 2: I'd turn off wasm in your browser. Just saying. And I'd recommend running one of the many Javascript blockers so that you're not just willy nilly running untrusted code on your machine.
      The goals of WebAssembly are definitely nefarious, for damn sure. Literally the -only- use for it is in malware... or at the very minimum it's the only use for it so far, even all these -years- later. Somehow I very much doubt the use cases for it will suddenly and magically change...



      • #23
        Originally posted by cjcox View Post
        As a FOSS developer, I guess I'm ok with WebAssembly if and only if the source code is provided so we aren't just randomly downloading and executing binary blobs from the Internet (which we all know is an incredibly safe place of noble do gooders).
        Well then you haven't been paying attention. WASM is something like ASN.1 or protobuf: it's a representation of data and functions. It doesn't care whether you put data into the format using Rust, Java, Swift or Python, so it has no business dictating what's supposed to happen to the source code. That's up to the developer.
        Though, as pointed out above, "decompiled" WASM is rather readable, so auditing the code shouldn't be a problem. It's just that there will never be a pop-up with the source asking for your permission to download the code.
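        For the curious, here is a minimal sketch of what "putting functions into the format" looks like, assuming a Rust toolchain with the wasm32-unknown-unknown target installed (the file and function names are just placeholders); the resulting binary dumps to quite readable text with wabt's wasm2wat:

        Code:
        // add.rs -- minimal sketch; assumes `rustup target add wasm32-unknown-unknown`
        // Compile: rustc --target wasm32-unknown-unknown -O --crate-type=cdylib add.rs
        // Inspect: wasm2wat add.wasm    (wasm2wat ships with the wabt toolkit)

        // Export a single function; the .wasm module records its name and signature.
        #[no_mangle]
        pub extern "C" fn add(a: i32, b: i32) -> i32 {
            a + b
        }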



        • #24
          Originally posted by bug77 View Post
          Well then you haven't been paying attention. WASM is something like ASN.1 or protobuf: it's a representation of data and functions. It doesn't care whether you put data into the format using Rust, Java, Swift or Python, so it has no business dictating what's supposed to happen to the source code. That's up to the developer.
          Though, as pointed out above, "decompiled" WASM is rather readable, so auditing the code shouldn't be a problem. It's just that there will never be a pop-up with the source asking for your permission to download the code.
          It's just so weird how we've gone from "allow everything" to "allow nothing" (user controlled) back to "allow everything" again.



          • #25
            Originally posted by Michael_S View Post
            But to be fair, WASM is the closest and most lightweight solution so far to the "cross-platform assembly code" problem. It's got a performance overhead, but that overhead buys the cross-platform compatibility and also protects against buffer overruns and stack smashing (but not use-after-free and race conditions). It has far less runtime overhead than, say, the JVM.
            Saying that something has far less runtime overhead than the JVM is like saying in a world of elephants that at least the elephants are much smaller than the one whale on the planet. It's true, but it doesn't give an accurate depiction of the ecosystem. Even .NET, which is the JVM's main competitor, is an elephant compared to that whale, because Sun Microsystems were dumbasses with the original design. A real-world .NET program has about the same memory overhead as a similar real-world C++ program. Which mostly boils down to two things: .NET lets you allocate on the stack, and .NET doesn't have an ass-backwards generics system that can't handle primitive types and requires heap allocations to work around. I would presume, though I haven't taken the time to look, that most other runtimes didn't follow the JVM down its particular hole of stupid.



            • #26
              For Firefox users, the about:config preference that disables WebAssembly:

              Code:
              javascript.options.wasm = false



              • #27
                This is what you get with WebAssembly:
                 * Cross-platform, true write once / run anywhere. Can target browsers and native runtimes (see the sketch after this list).
                 * More direct support for multiple programming languages. Languages that compile to Javascript have become wildly popular; this opens up more options and also provides faster startup, because WebAssembly is designed so that it can be parsed, verified, and begin executing before the download finishes. Runtime performance is better, too.
                 * All the same sandboxing that you get with Javascript. For anyone who remembers, Java applets in the browser, Microsoft ActiveX, and Flash were security disasters. Javascript interpreters in browsers are pretty battle-tested against security threats by now.
                * Big downloads matter for websites users visit rarely, but for favorite programs it's no worse than installing a mobile application.
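                 On that "native runtimes" point, here's an illustrative sketch using the wasmtime crate (the file add.wasm and its exported add function are hypothetical placeholders; error handling leans on the anyhow crate):

                 Code:
                 // Sketch: running a .wasm module outside any browser via the wasmtime crate.
                 // Assumes add.wasm exports `add(i32, i32) -> i32` (a placeholder module).
                 use wasmtime::{Engine, Instance, Module, Store};

                 fn main() -> anyhow::Result<()> {
                     let engine = Engine::default();
                     let module = Module::from_file(&engine, "add.wasm")?;    // load + validate
                     let mut store = Store::new(&engine, ());                 // per-instance state
                     let instance = Instance::new(&mut store, &module, &[])?; // no imports needed
                     let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
                     println!("2 + 3 = {}", add.call(&mut store, (2, 3))?);
                     Ok(())
                 }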

                This might be one of the most realistic ways to break the Windows/Android/iOS/Chrome OS stranglehold on consumer computing. WebAssembly doesn't require an app store and it's not limited to one platform. So if someone comes out with the next Fortnite, the next Minecraft, Excel, Matrix (the encrypted chat software), Bittorrent, Photoshop - anything - and it runs in WebAssembly then you can run it from any operating system with a WebAssembly runtime.

                Originally posted by Luke_Wolf View Post

                Saying that something has far less runtime overhead than the JVM is like saying in a world of elephants that at least the elephants are much smaller than the one whale on the planet. It's true, but it doesn't give an accurate depiction of the ecosystem. Even .NET, which is the JVM's main competitor, is an elephant compared to that whale, because Sun Microsystems were dumbasses with the original design. A real-world .NET program has about the same memory overhead as a similar real-world C++ program. Which mostly boils down to two things: .NET lets you allocate on the stack, and .NET doesn't have an ass-backwards generics system that can't handle primitive types and requires heap allocations to work around. I would presume, though I haven't taken the time to look, that most other runtimes didn't follow the JVM down its particular hole of stupid.
                As far as I understand it - and correct me if I'm wrong - the JVM does stack allocations just fine, it handles primitives well in terms of efficiency but (as you said) the syntax for working with them is terrible, and the biggest bit of memory overhead is loading the whole standard library at startup. So if you write code to work with, say, a list of integers in Java you could match .NET or even C++ for speed with the right syntax but the default option is List<Integer> which is far less memory and CPU efficient.



                • #28
                  Originally posted by Michael_S View Post
                  As far as I understand it - and correct me if I'm wrong - the JVM does stack allocations just fine,
                  So if it's a primitive type it's allocated on the stack; if it's a class type it's allocated on the heap. So already we start out with a caveat where "fine" means "only works with primitive types," whereas .NET, Rust, and C++ do not have this limitation.

                  In .NET, anything that's a value type, which includes both primitive types and user-defined types that are declared as structs and must follow certain rules (which in principle allows value types to be CoW internally), is stack allocated and handled with pass-by-value semantics, whereas anything that is a reference type (denoted by the class keyword) is allocated on the heap and follows pass-by-reference semantics. This covers most actual use cases (and if CoW is done, it can arguably be faster and more efficient than normal stack allocation).

                  In C, C++, or Rust you can declare any type as stack or heap allocated.
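                  To make that concrete, a trivial Rust sketch (illustrative only; the Point type is made up):

                  Code:
                  // Any type can live on the stack or, via Box, on the heap; the choice
                  // belongs to the code using the type, not to the type system.
                  struct Point { x: f64, y: f64 }

                  fn main() {
                      let on_stack = Point { x: 1.0, y: 2.0 };                      // stack
                      let on_heap: Box<Point> = Box::new(Point { x: 3.0, y: 4.0 }); // heap
                      println!("{} {}", on_stack.x, on_heap.y); // Box auto-derefs
                  }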

                  I'm not versed enough on other languages/runtimes to comment on their inner workings in this regard.

                  Originally posted by Michael_S View Post
                  it handles primitives well in terms of efficiency but (as you said) the syntax for working with them is terrible, and the biggest bit of memory overhead is loading the whole standard library at startup. So if you write code to work with, say, a list of integers in Java you could match .NET or even C++ for speed with the right syntax but the default option is List<Integer> which is far less memory and CPU efficient.
                  The trouble is that you can't have a List<int>, because Sun Microsystems were morons. If you want to have that you've got to use an int[] and deal with all the limitations of using an array rather than a container because they decided to create an artificial distinction between objects (heap allocated) and primitive types (stack allocated) in their type system. This artificial distinction means that the way they implemented generics (which basically involves casting to Object) forces you to use a wrapper type like Integer to get container semantics, which not only means we're kicking out primitive types to the heap, but also doubling the memory required (in the best case) because now we have an additional pointer on the heap per integer because your List<Integer> is really a List<int*> under the covers.

                  Everything else... doesn't have this problem. In .NET everything is an Object and its generics are also more intelligently designed (like, for that matter, basically everything that differs between it and the JVM; that's not so much learning from Java as it is Anders Hejlsberg being one of the best language designers out there, period), so it's not a problem there.

                  In Rust and C++, generics are a compile-time rather than a runtime construct, and their monomorphization handles everything, primitives included, just fine.
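                  As a rough sketch of what monomorphization means in practice (illustrative; the sum helper is made up): a Vec<i32> stores plain 32-bit integers contiguously, and the compiler stamps out one specialized copy of a generic function per concrete type, so there is no boxing and no per-element pointer:

                  Code:
                  // sum::<i32> and sum::<f64> become two separate, fully specialized
                  // functions at compile time: no boxing, no runtime dispatch.
                  fn sum<T: Copy + std::iter::Sum<T>>(items: &[T]) -> T {
                      items.iter().copied().sum()
                  }

                  fn main() {
                      let ints: Vec<i32> = vec![1, 2, 3]; // contiguous i32s, no wrappers
                      let floats = [1.5_f64, 2.5];
                      println!("{} {}", sum(&ints), sum(&floats));
                  }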



                  • #29
                    IMO WASM is a miss.
                    It descended from NaCl, but failed to capture its main point: the possibility of running native code inside a VM. Such as it is, WASM can't touch native performance in the typical case, let alone in most cases. Code has to be compiled into bytecode, which is then interpreted. It's optimized for performance within those constraints, so it should be better than JS, but not by that much.

                    To top it off, it still needs friggin JS, at least in the browser. So it can't even be used instead of JS.
                    Looks like a crap pile to me.

                    OTOH, I like Rust. I know, different tool for a different purpose, but I think it has a chance to capture the niche that Wasm failed to grasp.
                    I think this shows in the development speed: Rust progresses steadily through milestones, while Wasm makes a bit of news here and there.



                    • #30
                      Originally posted by bash2bash View Post
                      At the moment, almost all WebAssembly code around the internet is malicious.
                      And you're trying to imply that that isn't true for Javascript?

