LLVM 16.0 Released With New Intel/AMD CPU Support, More C++20 / C2X Features


  • #11
    Originally posted by pkese View Post
    The Xtensa architecture (famous for the ESP8266/ESP32 Wi-Fi-enabled microcontrollers) is also getting its first patches merged into LLVM.

    It'll take a while, though, before one can compile Rust with a mainline checkout of LLVM.
    In the meantime, the workaround is to download Espressif's LLVM fork.
    That'll be great for the long tail of support needed for these old parts.

    The new ones (such as the C3 and C6, and future chips in general, as Espressif has publicly stated) use RISC-V, whose support is also improving in both LLVM and GCC.


    • #12
      Originally posted by JacekJagosz View Post
      Mold author chose this license, because he wanted to actually get financial support for all the work he has done, from big companies using his project. I totally understand that, and distributions can still use it, sounds good to me.
      That's good for him, but it limits downstream use. And no, distributions can't necessarily still use it; that's the point of his comment.


      • #13
        Originally posted by Lycanthropist View Post
        I don't care about the exact license as long as the source is available. I care about linking speed.
        For downstream projects utilizing the linker the license very much does matter.


        • #14
          Originally posted by brad0 View Post

          For downstream projects utilizing the linker the license very much does matter.
          That may be, but it doesn't matter for me.


          • #15
            Still trying to troubleshoot why ROCm isn't working with gfx701 and gfx702, even though the LLVM 16 documentation shows they should work fine with:
            • rocm-amdhsa
            • pal-amdhsa
            • pal-amdpal
            Have some older FirePro cards and R9 390X cards in a box that were used for mining once upon a dream, but now I want to repurpose them for image generation or a local LLM. Just… stuck with getting them to actually work with the ROCm stack.

            It’s been low priority but they’ve been sitting in this box looking at me for 6 months.
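            One thing worth checking before blaming the LLVM target list: what the ROCm runtime actually enumerates can differ from what the compiler backend supports, so it helps to see which gfx IDs `rocminfo` reports for the installed cards. A minimal sketch of pulling the gfx target names out of `rocminfo` output (the helper name and the sample text are illustrative, not from the poster's machine; real `rocminfo` output prints a `Name:` field per agent):

            ```python
            import re

            def gfx_targets(rocminfo_output: str) -> list[str]:
                """Extract GFX target names (e.g. 'gfx701') from rocminfo-style text."""
                return sorted(set(re.findall(r"\bgfx[0-9a-f]+\b", rocminfo_output)))

            # Illustrative sample only -- run `rocminfo` yourself to get real output.
            sample = """\
              Name:                    gfx701
              Marketing Name:          AMD Radeon R9 390X
              Name:                    gfx702
            """
            print(gfx_targets(sample))  # ['gfx701', 'gfx702']
            ```

            If the cards don't show up as agents at all, the problem is likely at the kernel driver/runtime level rather than in LLVM's codegen support for those targets.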