It Turns Out RISC-V Hardware So Far Isn't Entirely Open-Source


  • #31
    Originally posted by Spooktra View Post
    This story, in one fell swoop, explains why Linux on the desktop never has, and never will, make any real inroads into breaking the Windows monopoly that exists in personal computers.
    Yet, still, except for your desktop computer and laptop...

    Originally posted by Spooktra View Post
    This is coming from a devout Linux user, I'm typing this on a system running Solus
    ...well, okay, maybe not yours personally (and not mine either; running openSUSE here). But the average user's...

    But nearly every device an average user interacts with runs the Linux kernel (from the firmware inside the Wi-Fi router to the servers that handle the pages that user is browsing), with BSD also having some presence (Mac OS X/iOS, recent PlayStations, some other routers, etc.).

    Linux (and others like BSD) have managed to make inroads into everyday use, including on devices that have taken over much of the usage formerly restricted to desktops/laptops (smartphones, whether running Android/Linux or iOS's Darwin kernel).
    Maybe the install base of Linux (and co.) isn't rising dramatically on desktops, but the usage has shifted toward devices running it.

    And it all comes down to customisability, modularity, choice, etc.
    While there are extremists who want RMS levels of code purity, there is a larger group of pragmatists.
    There are manufacturers who think the current state works "well enough" for them and will happily slap a Linux kernel on all the above-mentioned devices. Linux's openness isn't an all-or-nothing affair; there is a lot that is already usable, despite what the GPL extremists may think.
    The epitome of this is Linus's own approach, and the more philosophical reasons for NOT switching to GPLv3: wanting not to impede too much on industry's ability to explore and use Linux
    (in addition to the practical reason of not wanting to track down the author of every single line of code to ask for a relicense).

    (And that's also the path RISC-V is taking. One could wait yet another decade until every last transistor of the SoC is open source, or one could *already today* pair an open RISC-V core with some closed-source memory controller witchcraft.)

    Originally posted by schmidtbag View Post
    People need to ask themselves how making something open-source will benefit them personally. {...} Of course, there are hundreds of examples where being open-source is far more beneficial, but it isn't unanimously beneficial, and that's what Stallmanites just can't seem to grasp.
    ESR's "Linus's law" applies.
    Despite being far more ubiquitous than Windows, the above-mentioned Linux devices aren't hacked as often as one might think.
    There are many reasons for this (diversity makes them a harder target), but among them are security fixes, debugging, etc.
    Neither Linux nor BSD is magic pixie security dust. But compared to the horrendous swamp that Windows has always been, Linux and BSD at least have normal security, like any decent OS should (cue people insisting that their OpenBSD is better than average).
    That's mostly because, thanks to the open process, many more companies can pour effort into keeping Linux (both the kernel itself and userland) at least decently secure, and can leverage each other's efforts.
    By making Linux open, it becomes possible to build an OS that isn't as bad as Windows.

    The same could happen with CPUs: at a time when speculative execution vulnerabilities (i.e., CPU hardware vulnerabilities) are growing in number almost as fast as Sharknado sequels, having an open core that many more researchers can analyse outside Intel's labs will certainly help.

    (Or, in the specific case of RISC-V's closed DRAM controller: it could eventually help avoid Rowhammer-style attacks.)

    So eventually, we might get there. One day. That's why the "Stallmanites" are still important: they help pull us in that direction.

    In the meantime, current RISC-V SoCs are still a good intermediate step, getting actual hardware shipped and gaining exposure. That's the advantage of choice and open source.



    • #32
      Originally posted by starshipeleven View Post
      It matters once one of the many conditions you mentioned isn't met anymore.
      Right, which is why I concluded my post with "there are hundreds of examples where being open-source is far more beneficial". Don't get me wrong, I myself prefer open source, but my point is it's not always the best approach to all software. It can be (in fact, it usually is), but it isn't always, and diehard FLOSS fans need to understand this.
      Really, why are you taking a perfect situation and asking why would someone want safeguards in case something goes wrong? What kind of perverse reasoning is that?
      I'm not. I'm only saying there are situations where software can be closed-source and properly maintained, so it doesn't matter if it is closed. I am not implying that such situations are better off being closed, but rather it isn't a significant detriment.
      That's one of the examples of applications that run like shit and could use some community support.
      It runs perfectly smoothly for me. Sure, it's bloated and picky at times, but it works well enough. If it were open-sourced, great - everyone wins. But to me, it doesn't need to be open-sourced.

      Originally posted by DrYak View Post
      Neither Linux nor BSD is magic pixie security dust. But compared to the horrendous swamp that Windows has always been, Linux and BSD at least have normal security, like any decent OS should (cue people insisting that their OpenBSD is better than average).
      That's mostly because, thanks to the open process, many more companies can pour effort into keeping Linux (both the kernel itself and userland) at least decently secure, and can leverage each other's efforts.
      By making Linux open, it becomes possible to build an OS that isn't as bad as Windows.
      I both agree and disagree. Even Mac, which has always been very locked down (pre- and post-OSX), has been far more secure than Windows, even without things like an active firewall or antivirus. Windows in general is insecure because of how it was built. If it were open-sourced, it would still remain horribly insecure (granted, open-sourcing it would definitely improve its security).
      Remember, open-source security is a two-way street: it may allow more people to spot vulnerabilities and more people to properly patch them, but it also allows criminals to find exploits with fewer obstacles. So it really becomes a race over who finds the vulnerability first, and what to do about it. Furthermore, the more things you have open-sourced, the harder it is to pinpoint what a hacker has exploited.
      Of course, I'm well aware closed-source has its own slew of security issues, they're just simply different issues. When something is closed source, it is far more difficult to figure out exploits. The caveat is once a vulnerability has been exploited, it is often (but not always) more difficult to fix the problem and deploy it.
      The same could happen with CPUs: at a time when speculative execution vulnerabilities (i.e., CPU hardware vulnerabilities) are growing in number almost as fast as Sharknado sequels, having an open core that many more researchers can analyse outside Intel's labs will certainly help.
      I'm glad you brought this up, because it actually partially supports my point. Despite some of these exploits being around for over a decade, they have only recently been discovered, and by a very skilled team. No black-hat hacker would have the time or resources to figure out what that team discovered, so in theory these vulnerabilities could have gone undetected indefinitely. Despite these discoveries, it still isn't clear specifically how to exploit them, which, again, is thanks to things being closed source.
      Of course, if these CPU architectures were open-sourced, these problems could've been spotted years ago and the community could've provided their own microcode fixes.
      So there's 2 ways to look at it:
      A. Keep things closed and hope nobody discovers how to take advantage of the problem.
      B. Temporarily expose the problem to everyone, but ensure a fix early on.

      It's a lot like how eggs are handled in various countries to prevent potential diseases when eating them - in some countries, the eggs are washed, but require refrigeration because the natural protective barrier is removed in the washing process. In other countries, the eggs aren't washed and don't require refrigeration. The washed eggs kill any pathogens that may have been on them, but without the protective barrier, any new pathogens could infect the eggs. The non-washed eggs may already have pathogens on them, but they can't penetrate the protective barrier. So, which method is better? The answer isn't so black and white, just like security when it comes to software source code.

      tl;dr, I'm not saying closed source is better, I'm just saying sometimes it can be.



      • #33
        99% of the effort in designing a CPU is in the schematic and the chip layout, not the ISA. Most chips, including x86, have a completely open ISA, so having an open ISA is already done and not an issue. Making an ISA is an extremely simple and straightforward thing to do; there's just not much value in it at all, and since so many open ISAs already exist, they aren't really doing anything by creating yet another one.

        What really makes me question RISC-V and the logic behind it is why we need another ISA. Why not create an open-source schematic and core design around one of the numerous existing ISAs such as ARM or SPARC, or even x86? Another ISA means you have to have another toolchain, more compiler backends, and yet another OS distribution with gigabytes of binaries just for this CPU. Is RISC-V really open when they open the thing which is already open with most other CPUs, the ISA, but not the schematic, the chip core layout, and so on? Sounds like a lot of hype and smoke and mirrors to me. RISC-V solves problems that don't exist and doesn't solve any unsolved problems; in fact, it creates a new problem, since we now need yet more binary blobs and builds just for this CPU. If you want to make an open-source CPU, first support an existing ISA rather than require a huge effort to support yet another ISA.



        • #34
          Originally posted by jpg44 View Post
          99% of the effort in designing a CPU is in the schematic and the chip layout, not the ISA. Most chips, including x86, have a completely open ISA, so having an open ISA is already done and not an issue.
          ...
          What really makes me question RISC-V and the logic behind it is why we need another ISA. Why not create an open-source schematic and core design around one of the numerous existing ISAs such as ARM or SPARC, or even x86? Another ISA means you have to have another toolchain, more compiler backends, and yet another OS distribution with gigabytes of binaries just for this CPU. Is RISC-V really open when they open the thing which is already open with most other CPUs, the ISA, but not the schematic, the chip core layout, and so on? Sounds like a lot of hype and smoke and mirrors to me. RISC-V solves problems that don't exist and doesn't solve any unsolved problems; in fact, it creates a new problem, since we now need yet more binary blobs and builds just for this CPU. If you want to make an open-source CPU, first support an existing ISA rather than require a huge effort to support yet another ISA.
          x86 is NOT an open ISA. Ask NVIDIA: they tried to make an x86-compatible processor and got sued by Intel. AMD will easily grant you a license to x86-64 if you ask them. But then there are lots of patents on things like later versions of SSE or crypto instructions which still haven't expired, and good luck getting a license from Intel for those. AMD regularly has to negotiate agreements with Intel just so it can have a compatible processor. The only other vendor left which still had an ISA license was, I think, VIA, if its license hasn't expired yet. There are also a lot of patents on how to implement certain aspects of x86 in hardware. To get around this, Transmeta had to use software emulation of x86 on a different hardware architecture.

          You can implement only the patent-expired features of x86, but then you would get something like a Pentium Pro processor.

          The reason for another architecture is quite simple. The already open hardware architectures like SPARC and POWER have issues. A lot of people don't like the SPARC architecture design in general. As for POWER it is mostly tied to one vendor, which controls the ISA design. RISC-V was an attempt to make an open alternative to ARM for server processors. The larger cloud server users (think Google or Facebook) write their own software and order millions of custom hardware units. This would allow them to customize their own CPU or SoC and avoid paying expensive royalties or license fees to someone else.
          Last edited by vasc; 25 June 2018, 01:28 PM.



          • #35
            Originally posted by jpg44 View Post
            99% of the effort in designing a CPU is in the schematic and the chip layout, not the ISA. Most chips, including x86, have a completely open ISA, so having an open ISA is already done and not an issue. Making an ISA is an extremely simple and straightforward thing to do; there's just not much value in it at all, and since so many open ISAs already exist, they aren't really doing anything by creating yet another one.

            What really makes me question RISC-V and the logic behind it is why we need another ISA. Why not create an open-source schematic and core design around one of the numerous existing ISAs such as ARM or SPARC, or even x86? Another ISA means you have to have another toolchain, more compiler backends, and yet another OS distribution with gigabytes of binaries just for this CPU. Is RISC-V really open when they open the thing which is already open with most other CPUs, the ISA, but not the schematic, the chip core layout, and so on? Sounds like a lot of hype and smoke and mirrors to me. RISC-V solves problems that don't exist and doesn't solve any unsolved problems; in fact, it creates a new problem, since we now need yet more binary blobs and builds just for this CPU. If you want to make an open-source CPU, first support an existing ISA rather than require a huge effort to support yet another ISA.
            Apart from learning from past mistakes, RISC-V will look very similar to MIPS to a compiler, so the burden is not that huge compared to a completely new ISA.
            The x86 ISA (and most others) is documented; you can read most of the details for free. That does NOT mean you are free to implement the ISA in a chip, however. x86 chips can currently only be created by Intel, AMD, and VIA (or by explicit contractors, like Rockchip building chips for Intel). Copyrights and patents may run out, but then you end up with designs that are decades out of date and need separate support as well.
            ARM and MIPS need similar licenses as well. MIPS can be somewhat circumvented, as AFAIK only a few instructions are covered by patents, and those could nowadays be left out (it would still be impossible to run generic MIPS code then). In short, none of these ISAs is "open" in the sense that you are free to build a chip without some legal department knocking on your door.

            Further, as you rightly state, most of the magic is in the chip design itself; open ISA or not, you can bet that as soon as RISC-V gets traction you will see proprietary designs with much bigger research budgets than the open-source variants. The ISA won't help you there.
            Still, having the option of entirely open hardware makes a lot more sense than, say, complaining about firmware blobs for closed hardware, which could just as well have the same blob embedded.



            • #36
              Originally posted by RSpliet View Post
              Turns out the analogue side of a DRAM controller is much like witchcraft. SiFive hasn't had the time or manpower to do it properly. On the other hand, from the digital world's point of view it's a fairly dumb component, so the ROI of reinventing this wheel is minimal. Instead of making such a big investment, they decided to allocate their resources to processor architecture and production. Unfortunately, this means their hands are now tied to someone like Cadence for the DRAM controller.
              An understandable move, which you'd expect to tick off the purest of purists among GNU fanatics who don't like half-empty glasses. Most people, though, are impressed the glass is already half full.
              See, the trouble is that if you're willing to use a binary blob for DRAM init, you can just buy a $10 Chinese ARM board rather than the $1000 SiFive board and benefit from much better distro and software support too. In fact, because the $10 boards are so popular, someone reverse-engineered the DRAM init for the most common ones and wrote an open-source replacement, so you don't even have to put up with that anymore! SiFive is charging a huge premium for open hardware that is actually, for all practical purposes, less open than some of the really cheap repurposed Chinese set-top-box chips.



              • #37
                Originally posted by schmidtbag View Post
                I'm glad you brought this up, because it actually partially supports my point. Despite some of these exploits being around for over a decade, they have only recently been discovered, and by a very skilled team. No black-hat hacker would have the time or resources to figure out what that team discovered, so in theory these vulnerabilities could have gone undetected indefinitely.
                Well, nope. Not at all. The base concept isn't new. What's new is the attempt to make something out of it.

                The potential drawbacks of speculative execution have been known from the beginning (the CPU starts processing something that maybe it shouldn't).
                But the answer was: "the CPU will throw away the useless work (it won't commit, i.e., it won't write those results back into the registers), and that should be good enough!(tm)".
                Remember, the first mass-produced CPU to do this was the Intel Pentium Pro (and the Pentium II and III, which derive from it). That's very old.
                Nobody paid any attention to the fact that pages still got pre-cached.

                Fast forward to today.
                CPU tech has advanced incredibly: bigger caches, longer pipelines, speeds orders of magnitude higher, and far more multitasking.
                The hacking scene has evolved too. Trying to guess what other processes on the same machine are doing is now actually interesting.

                We've been through periods of really crazy stuff:
                hackers trying to guess what other processes are doing just by looking at timing, timing cache accesses, etc.
                That had to be counteracted with newer implementations of crypto libraries that run in constant time (always executing the same instructions in the same order; only the contents of the registers change depending on the input, the execution flow is always the same, with no password-dependent "if" conditions).
                We've even seen batshit-insane hackers manage to guess the password in use from the hum emitted by capacitors, captured with a smartphone.
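                The constant-time idea above can be sketched in a few lines of C. This is a minimal illustration (the function names are mine, not taken from any real crypto library): the first version leaks information through its early exit, while the second always does the same amount of work regardless of the data.

```c
#include <stddef.h>
#include <stdint.h>

/* Leaky version: returns at the first mismatch, so an attacker timing
 * many guesses learns how many leading bytes were correct. */
int leaky_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;   /* early exit: running time depends on the data */
    return 1;
}

/* Constant-time version: no data-dependent branches. It OR-folds the
 * XOR of every byte pair and checks the accumulator once at the end,
 * so the execution flow is identical for every input. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];   /* diff becomes non-zero iff any byte differs */
    return diff == 0;
}
```

                Real crypto libraries take further care (compiler barriers, avoiding secret-dependent memory addresses), but the "same work every time" principle is the one described above.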

                Nowadays, timing memory accesses to probe cache state is a perfectly normal thing for a hacker to do.

                It's just that suddenly lots of people realised they could combine this with speculative execution ("after all, *only* the CPU's work is thrown away, not the caching," goes the light bulb).
                And several people have poked at the concept (Spectre v1: reading past a conditional, such as a buffer-length check, via mispredicted speculative execution, and gathering the result from the cache state, even though the actual result is never written to the registers) and noticed that it works.
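                The Spectre v1 pattern described above boils down to a tiny code shape. A hypothetical sketch (array names and sizes are illustrative, loosely following the shape published in the original Spectre write-ups): architecturally the bounds check makes this function safe, but a mistrained branch predictor can run both loads speculatively for an out-of-bounds x, leaving a cache line of array2 touched according to the secret byte.

```c
#include <stddef.h>
#include <stdint.h>

#define STRIDE 64  /* one cache line per possible byte value */

size_t  array1_size = 16;
uint8_t array1[16] = { 1, 2, 3, 4 };   /* attacker-reachable data        */
uint8_t array2[256 * STRIDE];          /* probe array: the side channel  */

/* Architecturally this never reads out of bounds: the check dominates
 * both loads. Speculatively, a mispredicted `x < array1_size` lets the
 * CPU read array1[x] for an out-of-bounds x and touch a cache line of
 * array2 indexed by that secret byte. The register result is squashed
 * on mispredict; the cache footprint is not, and can be recovered by
 * timing accesses to array2 afterwards. */
uint8_t victim(size_t x)
{
    if (x < array1_size)
        return array2[array1[x] * STRIDE];
    return 0;
}
```

                The code itself is harmless to run; the leak only exists microarchitecturally, which is exactly why it went unnoticed for two decades.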

                It's not revolutionary; it's the sudden realisation that two ideas that have been lying around for quite some time can be combined into something greater.

                If Google Project Zero hadn't published it, somebody else was going to discover it.

                The hard work comes from two other things:
                - actually managing to build a real-world exploit out of Spectre v1 (Google Project Zero's code exfiltrates kernel data using Spectre v1);

                - and, now that the base concept is established, finding all the other places (besides a buffer-length check) where Spectre could be abused. That is where the majority of the hard work went.

                (It helps to know about the Intel microcode bugs where two hyper-threaded processes could accidentally corrupt each other's data.)

                And thus came the line of Meltdown (Intel will speculatively execute past... WTF? Memory protection?!? You must be kidding), Spectre v2 (a horribly contrived way to trick a few select Intel processors into speculatively jumping to arbitrary positions of your choice in processes that don't even belong to you, such as the hypervisor, by confusing the branch predictor), and Spectre v4 (accessing memory data that the CPU hasn't yet realised is about to be overwritten).

                Each time, the genius lies in finding another non-obvious target for speculative execution, finding a way to exploit it (Spectre v2's confusion of the predictor is the most elaborate, in my opinion), and managing to write actual code that applies the exploit in a useful way (exfiltrating kernel or hypervisor data).



                But given what had been known about speculative execution for decades, and what is currently doable with caches, the initial discovery of Spectre as a new class of exploit was bound to happen at any moment (again: several different researchers had already started toying with speculative execution and boundary checks).


                Which brings me to my point of view:
                So there's 2 ways to look at it:
                A. Keep things closed and hope nobody discovers how to take advantage of the problem.
                B. Temporarily expose the problem to everyone, but ensure a fix early on.

                {...}

                tl;dr, I'm not saying closed source is better, I'm just saying sometimes it can be.
                Exploits are going to get discovered eventually, no matter how much secrecy you wrap them in (see the DRM of modern gaming consoles eventually getting broken).

                At that point, a project with lots of public eyeballs staring at it is the best strategy (i.e., a project that is open source, because that helps with the poking, and a project that is popular, which means lots of eyeballs and lots of resources available to encourage the poking).

                Unless you're a small, low-profile project, counting on secrecy is never a good option.

                If you are very small and very low-profile (say, your own home-grown personal server), chances are nobody will bother trying to break your thing; it's not worth the effort. (Being open would only give script kiddies the chance to attack one single target: your server.)

                In any larger case, if you're a high-profile target, secrecy won't save you (again, see DRM).

                Whereas open source at least makes it possible for people to try to fix the bugs they've discovered with tools such as AFL.



                • #38
                  Originally posted by makomk View Post

                  See, the trouble is that if you're willing to use a binary blob for DRAM init, you can just buy a $10 Chinese ARM board rather than the $1000 SiFive board and benefit from much better distro and software support too. In fact, because the $10 boards are so popular, someone reverse-engineered the DRAM init for the most common ones and wrote an open-source replacement, so you don't even have to put up with that anymore! SiFive is charging a huge premium for open hardware that is actually, for all practical purposes, less open than some of the really cheap repurposed Chinese set-top-box chips.
                  Bingo! And that ARM chip probably performs better, uses less power, and has an entire software ecosystem already built for it, plus multiple sources for similar chips at very low cost. When you also add in the fact that it has more open firmware than current RISC-V chips, RISC-V doesn't look very good at all right now as a general-purpose computer/SBC.

                  Of course, if you're Google, Microsoft, etc., fabbing your own custom chips for specific tasks and roles, RISC-V looks great. We even use RISC-V in FPGAs as soft cores here; it's just not something we'd ever expect to actually compete with available x86, ARM, or POWER silicon.



                  • #39
                    Originally posted by makomk View Post
                    See, the trouble is that if you're willing to use a binary blob for DRAM init, you can just buy a $10 Chinese ARM board rather than the $1000 SiFive board and benefit from much better distro and software support too.
                    Thank you for this wonderful demonstration of a classic slippery-slope fallacy! If I may: ARM cores use binary blobs for the TrustZone mechanism. Other binary blobs on the SoC include those fed to the GPU, Wi-Fi, DSP, and the modem/sound components. These blobs aren't all equal. Some of them have code running persistently; others just convey a bit of initialisation data. Some run on cores with access to other system components; others are well isolated. There are various degrees of evil here. You can't just claim "if you're willing to accept one, why not swallow the whole lot" and be done with it. Besides...
                    Let's look at this from SiFive's perspective. They have a handful of hardware engineers who have to get a job done. Their job: an SoC featuring (open) RISC-V CPU cores, delivered before the RISC-V movement loses momentum and people forget about it. Oh, and while the ISA spec is still in motion. Surely that means they have to pick their battles; Rome wasn't built in a day. Nor was Linux, for that matter; it took a long time before you could run a Linux system without any proprietary drivers and software. So they take an iterative approach to open hardware, starting at the core.

                    Look, I get it. For a consumer, those HiFive boards are not competitively priced. The Rocket core's performance is equivalent to a wimpy ARM Cortex-A5 on a good day, a Cortex-M3 on a bad day. And for all that money you're still not buying "100% open", but rather "20% more open". That's not what the swelling community was hoping for when those Berkeley guys first announced RISC-V, but it turns out this is a much tougher nut to crack than many might think. I think, though, that all this is as much about expectations as it is about achievements. All I suggest is that, rather than speaking so negatively about the whole RISC-V ecosystem because one party isn't meeting your expectations yet, we praise them for what they did achieve and encourage them to keep pushing forward iteratively until we're truly rid of all those closed-source firmware blobs. And if the achievements of today are too little for you to justify buying the actual board, then do as I do and don't buy it.
                    Last edited by RSpliet; 26 June 2018, 10:15 AM.



                    • #40
                      Originally posted by brrrrttttt View Post
                      And what do they do between those states?
                      There aren't clock edges between them, so you don't know. Digital is discrete in both the time domain and amplitude. By sampling at finite intervals (which must be no shorter than the time it takes the signal to settle), you can make the world look digital.
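                      The amplitude half of that claim can be sketched with a toy quantiser (a hypothetical illustration, not modelled on any real ADC): a continuous value in [0, 1) collapses onto a fixed number of discrete levels, just as sampling at settled instants discretises the time axis.

```c
/* Map a continuous amplitude v in [0, 1) onto `levels` discrete steps.
 * Everything between two steps is rounded away -- that information,
 * like the signal's behaviour between sample instants, is simply lost. */
int quantize(double v, int levels)
{
    int q = (int)(v * levels);       /* truncation; same as floor for v >= 0 */
    if (q < 0)       q = 0;          /* clamp out-of-range inputs            */
    if (q >= levels) q = levels - 1;
    return q;
}
```

                      With 4 levels, 0.26 and 0.49 both land on level 1: the in-between detail is gone, which is exactly what makes the digital view robust to analogue noise.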
                      Last edited by coder; 26 June 2018, 03:30 AM.

