Rust-Written Redox OS 0.8 Released With i686 Support, Audio & Multi-Display Working

  • #51
    Originally posted by xfcemint View Post

    You are presenting the wrong order of importance. The ability of a microkernel to restart services is not a very significant feature of microkernels. Instead, there are many other benefits of microkernels, but the list is just too long to be written down.

    Regarding your argument, what you fail to realize is that a microkernel is exactly the right technology to "get things right from the 1st attempt". In a monolithic kernel, you need endless patching. A microkernel is a bug-detection device: it enables developers to track down the chain of privileges and to easily figure out "who is responsible" for a bug. A microkernel won't stop bugs from happening, but it is certainly able to automatically detect a whole lot of them.
    I can't even imagine how this would work out. So now we have microdebuglinux to run Linux, or what? And then some smart guy codes a module microdebuglinuxtolinux.ko, and now we have introduced another error-tracking error into our error-prone kernel, and run a microkernel on top?

    Comment


    • #52
      Originally posted by erniv2 View Post

      Why code a compatibility layer to an existing kernel instead of coding inside the kernel itself? That's what is a waste of time. Ah, my CPU is buggy, let's code some microcode patches; ah, my microcode is buggy, let's code some kernel patches; ah, my kernel is buggy, let's code some userspace workarounds.

      The thing would be to get things right in the first place, and a microkernel won't stop bugs from happening. I posted this before: even with a microkernel, if an error happens at a lower layer like caching, then all the modules that run on top of it need to be reset, and in the worst case the error persists and you need to bluescreen and reboot.

      The more layers, the more errors. Who tells you that a wrapper between CPU microcode and the execution layer can catch the bugs? It's more likely to cause a fatal error, or to be exploited even more.
      I've said the same thing about modern computer designs, where it's like "hey, we can do a very small amount of processing very fast with this super-fast memory sitting on the CPU die, but everything else is slower, because of price or some other reason...." Memory upon memory upon memory upon memory, whether it's called "cache" or "storage". It's an excuse that is keeping computer designs behind in innovation.

      Comment


      • #53
        Originally posted by rclark View Post
        How I see it is... This really will help test/validate Rust for 'real' use. Hopefully a Rust ANSI standard will be coming soon that developers can compile against, so as not to end up chasing their tails, so to speak, when Rust language developers decide to break things for a better mousetrap.

        Neat project!
        Completely false. For more information, see: https://blog.m-ou.se/rust-standard/

        1) Rust *very rarely* breaks backwards compatibility. It just doesn't happen. New features become available, but I can only think of a couple of times when the behavior of a feature has changed.

        They ensure this through constant testing of real-world code, just in case someone was relying on a behavior that was undocumented. Every single compiler release is build-tested throughout development against every single crate on crates.io and every single Rust project on GitHub. They also run all of the tests for every one of those repos. This provides enormous feedback on what people are actually doing and what strange usages of the language they need to support.

        2) even when they DO break backwards compatibility with an older version of the language, the latest version of the compiler ALWAYS maintains compatibility with all past versions of the language through Editions. A compiler that came out yesterday will support every single version of the language from any point in the past (see the sketch after this list).

        3) the standardization process for Rust is FAR, FAR more stringent than any ANSI C standard process ever was.

        3a) the C standard doesn't do ANYTHING to tame the implementation of C compilers and tooling. The standard follows the implementations, not the other way around. The standard is an attempt to document whatever happens to be common between already existing implementations, and any place where they diverge is firmly "undefined behavior." This has happened more times than I think anyone here is comfortable admitting.

        3b) the rust standardization process is LONG. After an enormous and meticulous proposal and acceptance process, every new feature ships in the nightly compiler build for potentially months or years to gauge how people actually use it. It is only when all of the corner cases have been firmly ironed out that it is finally "stabilized" and appears in the next compiler release. This entire process exists because Rust's developers KNOW that once a feature finally lands in a Rust release they will be supporting it FOREVER.
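
        As a concrete illustration of point 2, here is a minimal sketch (my own example, not from the thread or the linked post) of how editions work in practice. The crate name and code below are hypothetical; the point is that one current rustc compiles every edition side by side:

        ```rust
        // A minimal sketch of the edition mechanism (illustrative, not from the thread).
        // The crate's Cargo.toml declares which edition its source is written against:
        //
        //     [package]
        //     name = "edition_demo"   # hypothetical crate name
        //     version = "0.1.0"
        //     edition = "2021"
        //
        // `async` only became a reserved keyword in the 2018 edition, so a crate that
        // declares `edition = "2015"` can still use it as a plain identifier with a
        // current rustc, while the very same compiler accepts the modern keyword here:
        async fn answer() -> i32 {
            42
        }

        fn main() {
            // The future is never awaited; this only demonstrates that the 2018+
            // keyword parses under the edition this crate declares.
            let _future = answer();
            println!("one compiler, every edition");
        }
        ```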

        Comment


        • #54
          Originally posted by cooperate View Post
          It’s MIT licensed, so it’s not going anywhere.
          It's MIT licenced, so it's going EVERYWHERE.

          It's absolutely no secret that the open source world is moving firmly in the direction of permissive licences. They're the ones companies feel more comfortable using and contributing back to. Look at the statistics on GitHub and you will see nothing but more and more important and influential projects being licenced as MIT and fewer and fewer being GPL.

          However you may feel about the possibility that someone somewhere might *not* contribute back, the statistics are an objective fact.

          Comment


          • #55
            Originally posted by Classical View Post
            I absolutely do not understand the popularity of Rust. Some reasons why I wouldn't use Rust:

            1. Memory safety. This is supposedly the big reason why we should use Rust. The CHERI memory-protection features allow historically memory-unsafe programming languages such as C and C++ to be adapted to provide strong, compatible, and efficient protection against many currently widely exploited vulnerabilities.

            2. Performance. Rust is slow as a snail in real programs compared to C: https://ehnree.github.io/documents/papers/rustvsc.pdf
            Generally speaking, C still dominates Rust by a relatively wide margin in terms of execution time. Consequently, there is definitely a performance cost associated with using Rust instead of C.
            https://renato.athaydes.com/posts/re...sp-part-2.html
            Last, but not least, notice how Common Lisp is the fastest language of all, beating even Rust, on the smaller runs (and notice that this is no hello-world, it loads over 70,000 words into a hash-table, then encodes 1,000 phone numbers using those words - all of that in a mere 59ms, well ahead of Rust, somehow, which needs 89ms!).

            3. Ease of use: https://ehnree.github.io/documents/papers/rustvsc.pdf
            However, there are still some annoyances about Rust that can, at times, make implementing a simple algorithm difficult.

            What's the point of Rust?
            CHERI is all about fixing things after the fact, with runtime overhead that's pushed down into hardware. Rust is about prevention.

            Rust is not slower than C; that's a well-proven point by now. The brand new Rust NVMe driver for Linux? It's within two percentage points of the aggressively optimized C implementation, without any optimization work at all. Right now rustc is forcing LLVM to fix optimization paths that are too buggy to use, simply because they were impossible to exercise from C. Trying to enable them would generate hopelessly buggy code, because C code can't be analyzed reliably enough to determine whether the optimizations are safe.
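
            To make the prevention point concrete, here is a minimal sketch (my own illustration, not taken from the thread): the borrow checker rejects a dangling reference before the program can run at all, while a capability scheme like CHERI only traps the bad access at runtime.

            ```rust
            // Illustrative example, not from the thread. The commented-out function is
            // rejected at compile time -- "`x` does not live long enough" -- which is
            // exactly the class of bug a runtime scheme would only catch when the bad
            // access actually happens:
            //
            //     fn dangling() -> &'static i32 {
            //         let x = 42;
            //         &x
            //     }
            //
            // The accepted version hands back ownership of the value instead of a pointer:
            fn owned() -> i32 {
                let x = 42;
                x // the value is moved out, so no reference can outlive its data
            }

            fn main() {
                println!("{}", owned());
            }
            ```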

            Comment


            • #56
              Originally posted by cl333r View Post


              To me it is suspicious that they didn't include "performance" as part of their main goals, they might not be able to closely match Linux performance-wise later on.
              The Big Kernel Lock would like a word.

              Comment


              • #57
                Originally posted by Classical View Post
                I absolutely do not understand the popularity of Rust. Some reasons why I wouldn't use Rust:
                ...
                2. Performance. Rust is slow as a snail in real programs compared to C: https://ehnree.github.io/documents/papers/rustvsc.pdf
                That benchmark seems to be 7 years old. Don't you think that the benchmark is outdated by now? Why don't you show a recent benchmark?

                Comment


                • #58
                  Originally posted by oleid View Post
                  Most of the time C and Rust performance is close enough; in others Rust wins by a large margin: https://benchmarksgame-team.pages.de...ust-clang.html
                  Did you notice that when Rust wins it still (almost) always uses more memory than C? (The "mem" column means memory, right?)

                  Comment


                  • #59
                    Originally posted by xfcemint View Post

                    I'm having trouble understanding what you have said, but if I'm interpreting it correctly, then you have something against CPU caches.

                    If that is the correct interpretation of your words, then you are completely wrong; even worse, it is a display of total ignorance.

                    If that was supposed to be some kind of argument against microkernels, then it is wrong, but in terms of significance it is just an ant compared to an elephant of ignorance about caches.
                    The highest level of memory is waiting on slower levels of memory below that. Fact check that.

                    Comment


                    • #60
                      Originally posted by Waethorn View Post

                      The highest level of memory is waiting on slower levels of memory below that. Fact check that.
                      Technically correct, but completely and utterly missing the point to an embarrassing degree.

                      Comment
