Rust For The Linux Kernel Updated, Uutils As Rust Version Of Coreutils Updated Too


  • #31
    Originally posted by moltonel View Post
    Using rustc's tier criteria, Gcc is at tier 3 for all arches
    That seems hard to believe, when was the last time that a stable gcc failed to compile to valid x86 code or failed its unit tests?

    (if that seems hard to believe, pick any arch and look for the official level of support)
    I picked x86, where would I need to look? Google doesn't know either.

    PS: I hope you're not hanging on the fact that gcc doesn't provide binaries for download. You know they build and test it before they release stable source code with build instructions?



    • #32
      Originally posted by sinepgib View Post
      Is this true? I thought compiling the kernel with LLVM was still an ongoing effort.
      https://www.kernel.org/doc/html/latest/kbuild/llvm.html mentions Android, ChromeOS and OpenMandriva. LLVM supports LTO and some unique security features, while Gcc supports extra niche archs and a different set of security features. Don't confuse "LLVM is not the default" with "LLVM is not production-quality". Linux, Clang and Gcc are all ongoing efforts.



      • #33
        Originally posted by ClosedSource View Post
        When it reaches a stage where there aren't new versions of the compiler or standard library every two months that can break random code you did not write which compiled two months ago.
        Good news! Rust has had a backwards compatibility guarantee in place since 2015.



        • #34
          Originally posted by moltonel View Post

          https://www.kernel.org/doc/html/latest/kbuild/llvm.html mentions Android, ChromeOS and OpenMandriva. LLVM supports LTO and some unique security features, while Gcc supports extra niche archs and a different set of security features. Don't confuse "LLVM is not the default" with "LLVM is not production-quality". Linux, Clang and Gcc are all ongoing efforts.
          Don't confuse a question with a judgement about the quality of LLVM.
          I'm well aware it's production-ready and prefer it wherever I can. But I've been following news about advancements in its use for the kernel, and that led me to think it wasn't currently working fully for that. The "ongoing effort" mention was specifically about building the kernel, BTW. Since GCC is the default toolchain, that pretty much makes it not an ongoing effort (in that department).



          • #35
            Originally posted by Anux View Post
            That seems hard to believe, when was the last time that a stable gcc failed to compile to valid x86 code or failed its unit tests?
            I'm not saying that gcc's x86 backend is bad, I'm saying that the official guarantees are the same for all arches, namely nothing more than "there's support code for it" (aka "tier 3" as defined by rustc). Of course in practice common archs like x86 are very well tested and higher-quality than niche archs (though some regressions do slip by), but there's no hint about which arches are of lower quality and how bad they are.

            Is rustc's thumbv7neon-unknown-linux-musleabihf "Tier 3" support better than gcc's "yes" support? Good luck assessing that. This is where comments like "it's only tier 3 so it's not good enough" fall apart.
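            For reference, rustc can list every target it has support code for; the tier classification itself lives in the rustc book's platform-support pages rather than in any compiler output. A minimal sketch, assuming a rustup-managed installation:

            ```shell
            # List every target rustc knows about (tier 1 through tier 3 alike)
            rustc --print target-list

            # Check whether the target discussed above is present
            rustc --print target-list | grep thumbv7neon-unknown-linux-musleabihf

            # rustup only distributes prebuilt std for tier 1/2 targets
            rustup target list | grep thumbv7neon
            ```

            Roughly speaking, a target that appears in `rustc --print target-list` but not in `rustup target list` is tier 3: the support code exists, but nothing prebuilt is shipped for it.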


            I picked x86, where would I need to look? Google doesn't know either.
            I've searched before and couldn't find much. I'd love to be proven wrong.

            PS: I hope you're not hanging on the fact that gcc doesn't provide binaries for download. You know they build and test it before they release stable source code with build instructions?
            The lack of official builds is part of the reason, but the main issue I see is that testing is async and distributed: AFAIU patches are sent to mailing lists, the developer and reviewers do their due diligence in testing, and then it gets merged and various actors test it on a wider range of targets. Regressions get reported, prioritized and hopefully fixed.

            Compare that to rustc, where merge requests are sent to GitHub and the CI builds tier 1/2 platforms and tests tier 1 platforms. The request cannot be merged if the CI fails. If it is thought that tests might not cover the changes fully, a compile-and-test of all crates.io projects can be requested. After the merge, just like gcc, niche arch users will test the latest code and report regressions. Unlike with gcc, it's trivial to use the latest nightly compiler (and switch back to an older nightly or even to stable while your regression gets addressed).
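            The nightly/stable switching described above is a one-liner with rustup; a minimal sketch (the dated toolchain name is just an example of pinning a known-good nightly):

            ```shell
            # Install and try the latest nightly
            rustup toolchain install nightly
            cargo +nightly build

            # If a regression bites, pin a known-good dated nightly...
            rustup toolchain install nightly-2022-05-01
            rustup override set nightly-2022-05-01   # per-directory override

            # ...or fall back to stable while the fix lands
            rustup override set stable
            ```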

            I don't want to belittle gcc's platform support, it's certainly wider than rustc's. But it's hard to assess the quality of different platforms, and overall it seems less trustworthy than rustc's "tier 1".
            Last edited by moltonel; 24 May 2022, 11:40 AM.



            • #36
              Originally posted by sinepgib View Post
              Don't confuse a question with a judgement about the quality of LLVM.
              I'm well aware it's production-ready and prefer it wherever I can. But I've been following news about advancements in its use for the kernel, and that led me to think it wasn't currently working fully for that. The "ongoing effort" mention was specifically about building the kernel, BTW. Since GCC is the default toolchain, that pretty much makes it not an ongoing effort (in that department).
              I did understand your question was about the state of Clang to build Linux, not about LLVM in general. And my answer was that it's working fully, and is arguably the most common kernel compiler for devices in people's homes due to Android's market share.

              I'm not sure what you mean by "ongoing effort": building Linux with Clang has been as easy as calling `LLVM=1 make ...` for years. Clang and Gcc still get kernel-related changes every now and then, just as Linux gets Clang/Gcc-related patches. One compiler needs to be the default, but they're both equally well supported at this stage.
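              For concreteness, the kbuild documentation linked earlier boils the Clang build down to passing `LLVM=1` on every make invocation; a minimal sketch, run from a kernel source tree with the LLVM toolchain installed:

              ```shell
              # Configure and build the kernel entirely with LLVM tools
              # (clang, ld.lld, llvm-ar, etc. instead of gcc/binutils)
              make LLVM=1 defconfig
              make LLVM=1 -j"$(nproc)"
              ```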



              • #37
                Originally posted by moltonel View Post

                I did understand your question was about the state of Clang to build Linux, not about LLVM in general. And my answer was that it's working fully, and is arguably the most common kernel compiler for devices in people's homes due to Android's market share.

                I'm not sure what you mean by "ongoing effort": building Linux with Clang has been as easy as calling `LLVM=1 make ...` for years. Clang and Gcc still get kernel-related changes every now and then, just as Linux gets Clang/Gcc-related patches. One compiler needs to be the default, but they're both equally well supported at this stage.
                Yes, I'm talking about the past here. I thought it was an ongoing effort, which the link usefully contradicted.



                • #38
                  Originally posted by sinepgib View Post

                  Yes, I'm talking about the past here. I thought it was an ongoing effort, which the link usefully contradicted.
                  Even before it got upstreamed, I think Google was using it in their Android forks. That was relatively easy since they could throw away 90% of the kernel as irrelevant and only needed to add LLVM support to the bits that actually run on their phones.



                  • #39
                    Originally posted by moltonel View Post
                    Is rustc's thumbv7neon-unknown-linux-musleabihf "Tier 3" support better than gcc's "yes" support? Good luck assessing that. This is where comments like "it's only tier 3 so it's not good enough" fall apart.
                    I see that: gcc might have better support, but we can't be sure, and rustc gives us definite answers. But I think those tiers are there for a reason; if we know that builds or unit tests are not guaranteed, we should act accordingly and not say "yeah, it's never been better with gcc, so who cares". Because Rust is about guaranteed code quality, and that's the reason it gets in the kernel. An opportunity to make everything safer and better, so to speak.
                    I don't want to belittle gcc's platform support, it's certainly wider than rustc's. But it's hard to assess the quality of different platforms
                    Thanks for your clarifications, now I understand what you wanted to say and I mostly agree with that.
                    and overall it seems less trustworthy than rustc's "tier 1".
                    I'm a Rust fan and also see the value in autobuilds and tiers. Also, building Rust code with rustc carries much stronger guarantees than building C++ with gcc (ownership is checked at compile time). Still, I wouldn't see it as less trustworthy, since gcc gets checked by hundreds of distros that regularly build even more packages with it, which provides a huge amount of quality control.



                    • #40
                      Originally posted by moltonel View Post

                      I'm not saying that gcc's x86 backend is bad, I'm saying that the official guarantees are the same for all arches, namely nothing more than "there's support code for it" (aka "tier 3" as defined by rustc). Of course in practice common archs like x86 are very well tested and higher-quality than niche archs (though some regressions do slip by), but there's no hint about which arches are of lower quality and how bad they are.

                      Is rustc's thumbv7neon-unknown-linux-musleabihf "Tier 3" support better than gcc's "yes" support? Good luck assessing that. This is where comments like "it's only tier 3 so it's not good enough" fall apart.




                      I've searched before and couldn't find much. I'd love to be proven wrong.



                      The lack of official builds is part of the reason, but the main issue I see is that testing is async and distributed: AFAIU patches are sent to mailing lists, the developer and reviewers do their due diligence in testing, and then it gets merged and various actors test it on a wider range of targets. Regressions get reported, prioritized and hopefully fixed.

                      Compare that to rustc, where merge requests are sent to GitHub and the CI builds tier 1/2 platforms and tests tier 1 platforms. The request cannot be merged if the CI fails. If it is thought that tests might not cover the changes fully, a compile-and-test of all crates.io projects can be requested. After the merge, just like gcc, niche arch users will test the latest code and report regressions. Unlike with gcc, it's trivial to use the latest nightly compiler (and switch back to an older nightly or even to stable while your regression gets addressed).

                      I don't want to belittle gcc's platform support, it's certainly wider than rustc's. But it's hard to assess the quality of different platforms, and overall it seems less trustworthy than rustc's "tier 1".
                      You could take a look at rustc_codegen_gcc: https://github.com/rust-lang/rustc_codegen_gcc

                      It's a gcc codegen backend for rustc, and in its latest progress report its maintainer said it will likely be added to rustup in the future.

                      Not sure whether they will distribute gcc along with the codegen backend, but they will definitely have to distribute libgccjit, as it is patched and not yet upstreamed.

                      If rustup could distribute gcc plus the gcc codegen backend, then testing gcc within rustc could become much easier, and it might one day be tier 1, as every Rust crate could use this new backend in its CI and CD.
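                      For the curious, rustc_codegen_gcc is loaded through rustc's unstable `-Zcodegen-backend` flag, which is nightly-only; the shared-object path below is hypothetical and depends on where you built the backend:

                      ```shell
                      # Build a crate with the gcc codegen backend instead of LLVM
                      # (the .so path is an assumption for illustration)
                      RUSTFLAGS="-Zcodegen-backend=/path/to/librustc_codegen_gcc.so" \
                          cargo +nightly build
                      ```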
