Linux RISC-V Preparing For Real-Time Kernel Support (PREEMPT_RT)


  • #11
    Originally posted by Mitch View Post
    Is it expected to have some advantages over ARM and x86?
    You don't need to pay some company a couple of million dollars to build & ship a CPU that runs standard software.



    • #12
      Originally posted by ayumu View Post
      RISC-V is inevitable.

      RISC-V enables the best processors.

      RISC-V is rapidly growing the strongest ecosystem.
      What? You can't have drones with overclocked dual-socket 14900Ks? Only 1 kW of sustained power needed.



      • #13
        Who knows what will happen in reality, but a couple of disappointments with NOT fully open systems spring to mind.
        The historical and especially current state of affairs wrt. x86/x86-64 is that it's a horrible mess of insecure spaghetti. The ISA is so legacy-laden, evolved, complex, and "not open" in the fine details of the execution environment that people are unaware of, surprised by, or simply not thoughtful about all the little side effects each instruction has on the machine state. So there's always some corner case that doesn't work, or leaks sensitive data in a speculation "oops, never mind" side effect, or has different timing in a crypto op, or whatever.

        Similarly for ARM: the same kinds of insecurities, though fewer. There's also a system-level issue: all the add-on IP cores / peripherals one can license from ARM, or not. Strictly speaking that's orthogonal to a RISC-V vs. ARM *core* comparison, but at the system level one cares about how clear the documentation is and how good the BSP / library code and verification methods are for the whole set of system peripherals and the overall execution environment.

        So now, with ARM and of course x86 and MIPS and so on, none of that is open enough that one gets free / good / fast / accurate behavioral models of what *really* happens (functionally, or maybe even literally in some cases) when one executes some instruction, writes to some peripheral register, takes an interrupt, etc.
        Someone (QEMU, ...) MAY or MAY NOT come along and write a behavioral model / simulator for the instruction set that is more or less accurate and may or may not take nuances of the machine state into account. Will QEMU or whatever accurately simulate a SPECTRE / MELTDOWN case et al., or things involving data leaks, timing variability, race conditions, ...? I guess not, unless someone ADDED that based on knowing the details (good luck) and thinking them important enough to model to some level.
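
        To make that concrete, here's roughly what an instruction-level behavioral model looks like. This is a toy Python sketch; the tiny RV32I-ish subset and the pre-decoded tuple format are my own simplifications, not how QEMU actually works. Note that it updates architectural state and nothing else, which is exactly why Spectre-class timing effects are invisible at this level:

```python
# Toy instruction-level behavioral model of a tiny RV32I-ish subset
# (illustrative sketch only; instructions are pre-decoded tuples, not
# real RISC-V encodings). It updates architectural state -- registers,
# memory, pc -- with no pipeline, caches, or speculation, so timing
# side channels simply do not exist in this model.

MASK32 = 0xFFFFFFFF

def step(regs, mem, pc, program):
    """Execute the pre-decoded instruction at pc; return the next pc."""
    op, a, b, c = program[pc]
    if op == "addi":                 # x[a] <- x[b] + immediate c
        regs[a] = (regs[b] + c) & MASK32
    elif op == "add":                # x[a] <- x[b] + x[c]
        regs[a] = (regs[b] + regs[c]) & MASK32
    elif op == "lw":                 # x[a] <- mem[x[b] + offset c]
        regs[a] = mem[(regs[b] + c) & MASK32]
    elif op == "beq":                # if x[a] == x[b], branch to c
        if regs[a] == regs[b]:
            return c
    else:
        raise ValueError(f"unmodeled instruction: {op}")
    regs[0] = 0                      # x0 is hardwired to zero in RISC-V
    return pc + 1

# Tiny usage example: x1 = 2 + 3.
regs = [0] * 32
program = {0: ("addi", 1, 0, 2), 1: ("addi", 2, 0, 3), 2: ("add", 1, 1, 2)}
pc = 0
while pc in program:
    pc = step(regs, {}, pc, program)
assert regs[1] == 5
```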

        OTOH, if you start with an open ISA and even an open SystemVerilog (or whatever) model of a CPU core, internal SoC peripherals, etc., then maybe you've got a nice library of HDL code / models good enough to MAKE the system down to the RTL level, plus, hopefully, free / open behavioral / timing models. Then one can actually, automatically (i.e., the data is available, open, in machine-readable standard forms) SIMULATE, VERIFY, and MODEL the SoC / CPU / peripherals, not only at the "instruction does this" level (needed for IR-to-ASM optimization, ASM code generation from C, debuggers, JTAG test & verification, whatever) but at the "peripheral does this" and "SoC does this" levels.
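
        For instance, a machine-readable register map can drive a simulator, a C header generator, a debugger display, and a formal model from one source of truth. The UART layout below is made up for illustration; real-world formats in this role include CMSIS-SVD and SystemRDL:

```python
# Sketch of a machine-readable peripheral description and what tooling
# can derive from it automatically. The UART register map is invented
# purely for illustration.

UART_SPEC = {
    "base": 0x1001_0000,
    "registers": {
        "TXDATA": {"offset": 0x00, "access": "rw",
                   "fields": {"data": (0, 8), "full": (31, 1)}},
        "RXDATA": {"offset": 0x04, "access": "ro",
                   "fields": {"data": (0, 8), "empty": (31, 1)}},
    },
}

def field_mask(spec, reg, field):
    """Derive a bit mask and shift from the spec, not from a PDF."""
    lsb, width = spec["registers"][reg]["fields"][field]
    return ((1 << width) - 1) << lsb, lsb

def extract(spec, reg, field, raw_value):
    """Decode a field from a raw register read, driven purely by the spec."""
    mask, shift = field_mask(spec, reg, field)
    return (raw_value & mask) >> shift

# The same spec can feed a simulator, header generator, or formal model;
# that is the whole point of machine readability.
assert extract(UART_SPEC, "RXDATA", "empty", 0x8000_0041) == 1
assert extract(UART_SPEC, "RXDATA", "data", 0x8000_0041) == 0x41
```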

        So finally, with an open (or at least more available) ecosystem of standards-based CORE + peripheral + SOC libraries, it'd be easier to optimize, debug, simulate, verify, and analyze; to use ML tools to reason about designs and construct automated FORMAL PROOFS of design-by-contract "IT DEFINITELY WORKS" outputs at the SYSTEM LEVEL; to use ML tools to OPTIMIZE; and to help embedded / system engineers better understand and avoid hazards of race conditions / timing / synchronization / coherency / latency / throughput, because those would no longer be educated "it usually might mostly do X" guesses but calculable min / max / average facts of the system architecture.

        So I'd be excited to be in a world where simulators work, where one can formally prove the correctness of non-trivial HW / SW functional use cases at the system level (CPU core + peripherals + memory + hardware interfaces), and where the "how it works" documents for every damn CPU core and peripheral aren't obfuscated behind 5000 pages of unreadable PDF "technical manual" / "data sheet" / "application note" junk. Instead one would have machine-readable / interpretable SPECIFICATIONS of how the stuff functions / behaves, and one could then ask what's necessary / sufficient to make construct X work: a mutex, a semaphore, a lock-free algorithm, a packet / bus transaction that works in guaranteed real time, the latency to safe the system after something trips the FAULT / EMERGENCY STOP indicator, whatever you can imagine.
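
        E.g., the emergency-stop question becomes simple arithmetic once every stage publishes a timing contract. The stages and min / max bounds below are invented for illustration; in a spec-driven flow they'd come from the machine-readable timing contracts of each component:

```python
# Toy sketch of treating latency as a calculable contract rather than
# a guess. All stage names and bounds (in microseconds) are invented.

FAULT_TO_SAFE_PATH = [
    ("sensor detects fault",          (1.0,  5.0)),
    ("interrupt delivery",            (0.2,  2.0)),
    ("RT task wakeup (PREEMPT_RT)",   (0.5, 10.0)),
    ("actuator shutoff write",        (0.1,  1.0)),
]

def path_bounds(path):
    """Sum per-stage (min, max) bounds into end-to-end latency bounds."""
    lo = sum(b[0] for _, b in path)
    hi = sum(b[1] for _, b in path)
    return lo, hi

lo, hi = path_bounds(FAULT_TO_SAFE_PATH)
print(f"fault -> safe state: {lo:.1f} us best case, {hi:.1f} us worst case")
# The system-level requirement is then a checkable fact, not folklore:
assert hi <= 25.0, "contract violated: must reach safe state within 25 us"
```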

        Otherwise we're really living / working upon a tower of spaghetti built with reference to a Tower of Babel of "documentation", "specification", and "requirements". The systems, or even tiny pieces of them, become more complex than a single human mind can comprehend / reason about, so it's a whisper game of "yeah, this works", "yeah, this works", ... except what they MEAN is that in use case X thing A works, in use case Y thing B works, etc.
        But at a HOLISTIC level, NOBODY UNDERSTANDS how the SYSTEM might behave in some complex / connected corner case, since nobody did the whole-system model as "design by contract", covered every single latency / error case / whatever, and verified the consistency of the overall system model / design to check that nothing was forgotten, swept under the rug, or ignored.

        And in particular, if AI/ML systems are going to start being able to GENERATE / SYNTHESIZE and also PROGRAM systems, it'd be nice to have an ecosystem of system metadata, construction primitives, test / verification primitives, models, HDLs, HIL frameworks, whatever, so that the entire system could be analyzed / synthesized to meet specification X without it becoming a sand mandala of minutiae for a bunch of humans to plumb the plumbing or test the testing because the machines can't reflect / introspect on the design / function of other machines or elaborate designs for better machines in an evolutionary way.

        Maybe I just created Skynet; I should stop.

        But seriously, 10,000-page PDF sets of ISAs, peripheral register usage, and address maps make my eyes / brain bleed.
        Please make it stop by having model- and contract-based system engineering / design / verification that actually works, from the bus to the core.

        Originally posted by stormcrow View Post

        This is all a potential argument for the OEMs, but I wonder whether all those savings will actually pan out for those who purchase systems, be it industrial systems, smartphones, VA integrators, or the odd hobby user. The traditional model in these spaces is a known, well-documented ISA; say what you will about the IP behind them, both x86_64 and ARM are very well documented, and that's usually enough for the vast majority of end users. The rest is proprietary, with varying degrees of documentation. Usually what programmers care about is reasonably well documented, while the nuts and bolts the OEM's competitors care about aren't, which constitutes a purposeful barrier to ripoffs: taking the engineering knowledge paid for by the OEM and effectively stealing it without contributing to the costs of R&D. The ripoff companies would otherwise always have an unfair advantage over the companies investing in R&D for new technology and engineering. This is not an unreasonable position for OEMs to take.

        Therefore, there's usually no immediate, concrete benefit from a fully open system for the average programmer (who doesn't write OS driver code), integrator, or the like, since both kinds of systems are documented in the ways they care about. I don't really see a practical benefit to these kinds of users beyond "oh! NEW SHINY!", and almost none at all to the typical end user, especially since I doubt those MFG/OEM cost savings on ISA royalties are going to be passed on to consumers in any meaningful way. MFG/OEMs are still going to try to distinguish their products with the rest of the stuff that goes into making a complete system, and those parts are very likely going to be just as legally encumbered, or have zero documentation and support from Chinese fly-by-nights, as the current crop of systems on the market now. My point? I don't believe RISC-V is going to be the knight in shining armor that beats the incumbents into submission, as some people are crowing. Open and royalty-free ISAs are all over the place, and except for a notable few expensive small-batch integrators (like RCS' Power9 & Power10-in-all-but-name), they have largely fallen by the wayside (SPARC & MIPS; it remains to be seen whether Loongson will keep MIPS alive in any meaningful way) or face the same problems I pointed out RISC-V is going to face (like IBM's Power10, and any smartphone: there aren't any open cell modems at all, and there are unlikely ever to be, because they'd immediately be sued by Qualcomm).

        I'd love to see open and auditable systems become the norm rather than the exception, but I believe the market isn't going to fund it, because the vast majority of people just don't care about the benefits of those systems, and litigious incumbents like Qualcomm, with huge patent portfolios covering entire industry standards, aren't going to allow it (and if you think politicians will stop them, you've not been paying attention).



        • #14
          RISC-V significantly lowers the barrier to entry for creating ASICs and other custom silicon. I think the HDD manufacturers use RISC-V in their controller chips now.
          ARM will always have a place in the foreseeable future unless they do something dumb.



          • #15
            Originally posted by peterdk View Post
            SiFive is doing a massive layoff, so it might not be a very commercially viable space currently.
            Yes, I did read the same:

            " RISC-V: Layoffs at core designer SiFive
            SiFive is one of the largest RISC-V design companies. Now it is apparently laying off a large part of its workforce, including engineers.
            ​"In a second statement, SiFive confirmed that a fifth of the workforce will be laid off. As speculated, there should be around 130 people.​"
            "

            https://www-heise-de.translate.goog/...&_x_tr_hl=de&_x_tr_pto=wapp
            Phantom circuit Sequence Reducer Dyslexia



            • #16
              Originally posted by Mitch View Post
              I feel out of the loop. What's the deal with all the RISC-V stuff lately? Is it expected to have some advantages over ARM and x86?

              Not doubting it. Just trying to keep up.
              x86 is an absolute monster of an architecture, based on an ISA design philosophy that's not really relevant anymore and saddled with 40 years of historical baggage, which excessively complicates processor design. Some people see RISC-V as an opportunity to start from a clean slate.



              • #17
                Originally posted by bachchain View Post

                x86 is an absolute monster of an architecture, based on an ISA design philosophy that's not really relevant anymore and saddled with 40 years of historical baggage, which excessively complicates processor design. Some people see RISC-V as an opportunity to start from a clean slate.
                Closer to 50 years, actually. To be fair, the ARM architecture is just a decade younger, but it's also well into its thirties. However, it was already a major improvement, even back in the day.

                But still, I wonder what improvements RISC-V could offer over ARM, having started with a clean slate just short of a decade ago.

                Do modern revisions of the ARM architecture still carry a lot of legacy "baggage"? My impression is that the stewards of the ARM architecture aren't as conservative as Intel and AMD when it comes to dropping legacy cruft, and have therefore managed to keep an already more modern architecture cleaner and leaner over the years. For one thing, there are already ARM cores on the market that no longer have native 32-bit support. Does anybody have some insights on this?



                • #18
                  The lackluster performance of RISC-V chips is not caused by the RISC-V instruction set itself.
                  It's down to the lack of time and money poured into RISC-V implementations, for now.
                  If Apple or AMD or even scroogintel were putting billions of R&D into a RISC-V chip, it would likely end up more efficient than ARM, since the ISA is even cleaner and more modern.

