Rust 1.14 Released With Experimental WebAssembly Support


  • Rust 1.14 Released With Experimental WebAssembly Support

    Phoronix: Rust 1.14 Released With Experimental WebAssembly Support

    The Rustlang developers have released Rust 1.14 in time for the holidays...

    http://www.phoronix.com/scan.php?pag...-1.14-Released

  • #2
    I've been watching a fellow Rust developer put together some blazingly fast web applications powered by Rust's WebAssembly support over the past month. He's been completing all of the Advent of Code 2016 challenges with Rust's WebAssembly target. Effectively, we can now have web applications running as efficiently as native desktop applications with Rust. We can pretty much ditch JavaScript for the majority of the front-end's logic and just focus on writing everything in Rust.

    Other notable features in this release are:

    1) HashMap is now much faster thanks to a more cache-friendly internal layout.
    2) Zip has been specialized, so mapping and cloning over a zip will be faster.
    3) MIR improvements are speeding up compile times even further. We have decent compile times now.

    Some great changes coming in future releases are:

    1) A faster default sort method -- much faster, in fact, and using a quarter as much memory.
    2) Better padding and reorganization of struct fields to reduce memory consumption.
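For the curious, the HashMap and zip changes land in ordinary std code with no API changes. A minimal sketch of the affected usage patterns (illustrative only, not a benchmark; the function names here are my own):

```rust
use std::collections::HashMap;

// HashMap got a more cache-friendly internal layout in 1.14;
// everyday usage like this counting loop is unchanged.
fn word_counts<'a>(words: &[&'a str]) -> HashMap<&'a str, u32> {
    let mut counts = HashMap::new();
    for word in words {
        *counts.entry(*word).or_insert(0) += 1;
    }
    counts
}

// zip was specialized in 1.14, so map/collect chains over zipped
// slices like this are exactly what the speedup targets.
fn pairwise_sums(a: &[i32], b: &[i32]) -> Vec<i32> {
    a.iter().zip(b.iter()).map(|(x, y)| x + y).collect()
}

fn main() {
    let counts = word_counts(&["rust", "wasm", "rust"]);
    assert_eq!(counts["rust"], 2);
    assert_eq!(pairwise_sums(&[1, 2, 3], &[10, 20, 30]), vec![11, 22, 33]);
    println!("ok");
}
```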



    • #3
      We also might be close to finally getting coroutines.

      https://github.com/rust-lang/rfcs/pull/1823
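To see why that RFC matters: without coroutines, a "yield"-style sequence has to be written as a hand-rolled state machine implementing Iterator. A sketch of that boilerplate (the type here is made up for illustration), which is roughly what generator syntax would write for you:

```rust
// A countdown sequence as a manual state machine. With coroutines,
// this could be a straight-line function with yield points instead.
struct Countdown {
    n: u32,
}

impl Iterator for Countdown {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        if self.n == 0 {
            None
        } else {
            let current = self.n;
            self.n -= 1;
            Some(current)
        }
    }
}

fn main() {
    let values: Vec<u32> = Countdown { n: 3 }.collect();
    assert_eq!(values, vec![3, 2, 1]);
    println!("ok");
}
```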



      • #4
        Originally posted by mmstick View Post
        I've been watching a fellow Rust developer put together some blazingly fast web applications powered by Rust's WebAssembly support over the past month. He's been completing all of the Advent of Code 2016 challenges with Rust's WebAssembly target. Effectively, we can now have web applications running as efficiently as native desktop applications with Rust. We can pretty much ditch JavaScript for the majority of the front-end's logic and just focus on writing everything in Rust.

        Other notable features in this release are:

        1) HashMap is now much faster thanks to a more cache-friendly internal layout.
        2) Zip has been specialized, so mapping and cloning over a zip will be faster.
        3) MIR improvements are speeding up compile times even further. We have decent compile times now.

        Some great changes coming in future releases are:

        1) A faster default sort method -- much faster, in fact, and using a quarter as much memory.
        2) Better padding and reorganization of struct fields to reduce memory consumption.
        HTML still needs to die in a fire and be replaced with an actual standard that lets web applications run as efficiently as native desktop applications. Keep in mind that a browser requires a minimum of three engines internally in order to "correctly" parse HTML: one for conformant interpretation, another for loose interpretation, and another for quirks mode, and there may be additional ones for compatibility with other browsers.



        • #5
          Originally posted by mmstick View Post
          We can pretty much ditch Javascript…
          Ditch JavaScript, eh? I must find out more.



          • #6
            Originally posted by Chewi View Post
            Ditch JavaScript, eh? I must find out more.
            Not completely, but mostly. WebAssembly grew out of the emscripten work and requires JavaScript to import and execute it. JavaScript can be used as glue code while machine code from compiled languages like Rust does all the heavy lifting, much as Python is often used for glue on the desktop.
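On the Rust side, that division of labor is just a plain exported function that the JavaScript glue calls into. A minimal sketch (the function name and build target mentioned in the comments are illustrative; the code itself is ordinary Rust and also runs natively):

```rust
// A function intended to be compiled to WebAssembly and invoked from
// JavaScript glue code. With the emscripten toolchain of this era it
// would be built via `cargo build --target=wasm32-unknown-emscripten`;
// #[no_mangle] + extern "C" keeps the symbol name callable from outside.
#[no_mangle]
pub extern "C" fn dot_product(a: i32, b: i32, c: i32, d: i32) -> i32 {
    a * c + b * d
}

fn main() {
    // Natively we can call it directly; in the browser, the JS glue
    // would invoke the exported symbol instead.
    assert_eq!(dot_product(1, 2, 3, 4), 11);
    println!("ok");
}
```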



            • #7
              Originally posted by Luke_Wolf View Post

              HTML still needs to die in a fire and be replaced with an actual standard that lets web applications run as efficiently as native desktop applications. Keep in mind that a browser requires a minimum of three engines internally in order to "correctly" parse HTML: one for conformant interpretation, another for loose interpretation, and another for quirks mode, and there may be additional ones for compatibility with other browsers.
              It seems that would be simple to do if browser vendors worked together on an ultimate standard. Basically, we would just need a new universal HTML standard with strict requirements (no browser-specific features); when the browser encounters a page with a tag declaring that it follows the new standard, it picks the correct parsing engine internally.



              • #8
                Originally posted by mmstick View Post

                It seems that would be simple to do if browser vendors worked together on an ultimate standard. Basically, we would just need a new universal HTML standard with strict requirements (no browser-specific features); when the browser encounters a page with a tag declaring that it follows the new standard, it picks the correct parsing engine internally.
                That would be great, but in principle that was called XHTML, and much tantruming was had by web developers who weren't willing to write well-formed XML documents, so it got killed and we got stuck with the continued status quo in HTML5. So I'm not really hopeful about any sort of strict standard coming into place anytime soon to replace the "don't let the user see any errors" approach we have now.



                • #9
                  Originally posted by Luke_Wolf View Post

                  That would be great, but in principle that was called XHTML, and much tantruming was had by web developers who weren't willing to write well-formed XML documents, so it got killed and we got stuck with the continued status quo in HTML5. So I'm not really hopeful about any sort of strict standard coming into place anytime soon to replace the "don't let the user see any errors" approach we have now.
                  XHTML is a problem for code generation. You can't just copy and paste strings without keeping track of the DOM. People using lesser languages such as PHP can't be bothered to learn anything new. They want web pages to simply work like any plain text file written using latin1 encoding or some other 1980s code page.



                  • #10
                    Originally posted by Luke_Wolf View Post

                    HTML still needs to die in a fire and be replaced with an actual standard that lets web applications run as efficiently as native desktop applications. Keep in mind that a browser requires a minimum of three engines internally in order to "correctly" parse HTML: one for conformant interpretation, another for loose interpretation, and another for quirks mode, and there may be additional ones for compatibility with other browsers.
                    It's not just HTML but the DOM (and CSS... though CSS itself might be salvageable). Until we get an alternative (not necessarily a replacement, but an alternate way to structure your app; frankly, if XHTML had been fully developed and adopted, it might have done the trick), we're just shoehorning a general application framework into a system designed for creating documents.
                    Regardless, web components are certainly moving things forward, and along with new layout engines coupled with hardware-accelerated rendering, it's getting easier (React doesn't hurt either).

