
Weekend Discussion: How Concerned Are You If Your CPU Is Completely Open?


  • #71
    Originally posted by azdaha View Post

For what it's worth, I don't see it as mere hatred against IBM or POWER. Jon Masters apparently considers the POWER architecture to be on life support at this point. I don't know enough about the situation to claim that he is wrong. What I do know is that it's difficult, if not impossible, to find a viable POWER system for purchase. In fact, I had forgotten about it entirely until the announcements of the Spectre and Meltdown vulnerabilities. AMD, however, was able to benefit from these, as they created an opportunity to differentiate itself from Intel: AMD's chips were not impacted by Meltdown, and it was shipping a new line of CPUs and GPUs. The fact that IBM did not take advantage of this can be seen as further proof that the POWER architecture is on the way out, sadly following in the footsteps of IBM's former consumer products (ThinkPads).
    Lest anyone forget, those AMD CPUs come with a mandatory PSP, so anyone thinking they're gaining significant security by going AMD vs. Intel probably doesn't have the whole picture.

Should IBM have seized the opportunity with a significant PR blitz at the time? Probably; no, make that most definitely. It's not in IBM's nature to do so, however.

I fully expect the opening of the ISA to do good things for POWER, and I do not consider it on life support at this point. Had the ISA not been opened, and had Blackbird etc. not happened, I would have had to (sadly) agree and get on the RISC-V bandwagon for owner-controlled CPUs, putting up with the performance loss and trying to figure out how a desktop-class CPU could be funded from scratch, given that most shipping RISC-V CPUs are both locked down and embedded in other products. However, that's not what happened, and I think that if we can just get past various vendors pretending to have open firmware when they don't, and confusing the average consumer as a result, the future is bright for both POWER and RISC-V.

To be clear, though, I don't see x86 or ARM going anywhere. They are well suited for mass entertainment (the primary purpose of a cell phone these days is less communication and more various forms of consumption, whether direct video, audio, games, or various <X>aaS apps; the same goes for laptops), they have the controls Hollywood and Silicon Valley think they need to lease out that content and restrict access to their app platforms, and as a result they will probably far outsell any workstation- or server-class devices in sheer numbers. However, I note that television sets vastly outsold computers not so long ago, so pure volume is not the metric I'd choose to compare two very different classes of device.
    Last edited by madscientist159; 24 February 2020, 01:25 AM.



    • #72
      Originally posted by Space Heater View Post
Spectre affects essentially every processor with speculative execution. Also, IBM's POWER7, POWER8, and POWER9 were affected by Meltdown; it's not an Intel-specific problem.

      https://www.ibm.com/blogs/psirt/pote...-power-family/
AMD was affected too. Like you said, basically all modern processors were, but I feel the need to mention this to dispel the persistent rumor that AMD designed a perfectly secure chip. The PSP alone makes the machine untrustworthy to its physical owner, but even for those who don't care about machine control, Spectre is still a concern.
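For readers unfamiliar with the bug class under discussion, here is a minimal sketch of the Spectre v1 (bounds-check bypass) pattern; the array names and sizes are illustrative, not taken from any real codebase:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative Spectre v1 gadget. Architecturally, the bounds check
 * makes this code safe; microarchitecturally, a CPU with speculative
 * execution may mispredict the branch, read array1[x] out of bounds,
 * and leave a secret-dependent cache footprint via the array2 access.
 * That is why the bug class spans x86, ARM, and POWER alike. */
enum { ARRAY1_SIZE = 16 };
static uint8_t array1[ARRAY1_SIZE];
static uint8_t array2[256 * 512];

static uint8_t victim(size_t x)
{
    if (x < ARRAY1_SIZE)                    /* safe, architecturally... */
        return array2[array1[x] * 512];     /* ...but may run speculatively anyway */
    return 0;
}
```

The leak happens on the mispredicted path, so the architectural return values stay correct; an attacker recovers the secret byte by timing which cache line of `array2` became resident.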



      • #73
        Originally posted by wizard69 View Post
        That is exactly what I was saying.
        Is it?

        Originally posted by wizard69 View Post
The fact that it takes these people so long to fix Intel vulnerabilities highlights the futility of having somebody with zero time in the trenches doing anything significant with the design.
        I asked you to explain what would be a reasonable amount of time, which you seem to have completely ignored.

        For your information:
        It typically takes anywhere from 3 to 5 years for a new chip to go from the drawing board to the market.
        Source: https://www.techspot.com/article/184...-built-part-3/

Antony: How long does it take to design and manufacture a processor, and what is involved?

        Ophir: The process takes about four years.
        Source: https://www.forbes.com/sites/antonyl.../#5d81f1604d1c

        From CPU tape out to shipping takes a year
        Source: https://www.fudzilla.com/news/proces...g-takes-a-year

        More: https://www.quora.com/How-long-does-...p-from-scratch

Do you get it? From the time they learn about a vulnerability, it takes at least a year before they can possibly have a processor on the market with a mitigation. This is not a reflection of how hard the bugs are to find; it's a reflection of how long and involved the design process of a modern CPU is. Therefore, you cannot infer from the delay that the bugs are nearly impossible to find. When you make such judgements on the basis of so little information, you tend to reach bad conclusions.

        Originally posted by wizard69 View Post
I really don't see the point. The number of people who can do something useful with a hardware description, like Intel's current processors, is so thin that it doesn't even make sense to argue the value of opening the hardware.
Did you ever consider asking one of these security researchers how useful it would be for them to have the source code, before deciding on your own that it's not? What about going through the bug database of some open-source core designs and asking those who filed security bugs how useful it was for them to have the source? Or maybe you could track down some current or ex-CPU designers and testers and ask them how useful source access is for finding bugs?

This willingness to take and double down on positions, shrouded within a veil of ignorance, is pretty sad. Like azdaha, I'm embarrassed for you.

        Originally posted by wizard69 View Post
        What I don't buy is that open hardware has any value to the population in general, the reason is the massive wall one would have to breach to do anything constructive with a hardware description.
        Why isn't the same true of the Linux kernel?

        Originally posted by wizard69 View Post
Beyond that, what does one do if they should by chance find a bug in a piece of hardware? Most of us can't spin our own.
1. Report it.
2. Devise better or more efficient microcode or software mitigations, based on understanding exactly what the problem is, not merely a black-box characterization of it.
3. If they're a big customer, maybe one who already has their own fork of it, maybe they do spin their own.
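Point 2 can be made concrete. One well-known software mitigation for Spectre v1 is branchless index clamping, in the spirit of the Linux kernel's array_index_nospec(); the helper below is an illustrative sketch rather than the kernel's actual implementation, and it assumes arithmetic right shift of signed values (implementation-defined in ISO C, but the behavior of all mainstream compilers):

```c
#include <stddef.h>
#include <stdint.h>

/* Branchless clamp: returns index when index < size, else 0.
 * Because there is no conditional branch, there is nothing for the
 * CPU to mispredict, so even speculative execution cannot use an
 * out-of-bounds index. Sketch only; the real kernel macro differs
 * per architecture. */
static size_t mask_index(size_t index, size_t size)
{
    /* index - size wraps to a value with the top bit set exactly when
     * index < size; the arithmetic shift smears that bit into a mask
     * of all ones (in bounds) or all zeros (out of bounds). */
    size_t mask = (size_t)((intptr_t)(index - size) >> (sizeof(size_t) * 8 - 1));
    return index & mask;
}
```

An access like `array[mask_index(x, ARRAY_SIZE)]` then reads index 0 rather than out-of-bounds memory, even on a mispredicted path. Writing such a mitigation efficiently is exactly the kind of task where knowing the design, not just a black-box description, helps.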

        And, by the way, you completely glossed over the whole issue of microcode. But, by this point, that's hardly surprising.



        • #74
          Originally posted by Qaridarium View Post
What Vulkan did to GPUs, WebAssembly bytecode will do to the CPU ISA war.
          Huh? What did Vulkan do to GPUs?

          Originally posted by Qaridarium View Post
In the near future all software will use WebAssembly as a compatibility layer to make it compatible with all CPU ISAs on planet Earth.
          Somehow, I don't see most server apps doing this. I wouldn't run my entire compiler tool chain, my database server, or my NFS server in a web browser or headless equivalent.

          Originally posted by Qaridarium View Post
          no one cares about the CPU ISA anymore.
          They do matter, and they matter for reasons like power efficiency and security, if not also performance.

The success of x86 CPUs just shows how vast sums of money can eke a bit more life out of a bad ISA. Eventually, like a dog reaching the end of its leash, the limitations inherent in x86 will catch up with it, and it will finally be surpassed by better ISAs.



          • #75
            Originally posted by Space Heater View Post
Spectre affects essentially every processor with speculative execution. Also, IBM's POWER7, POWER8, and POWER9 were affected by Meltdown; it's not an Intel-specific problem.

            https://www.ibm.com/blogs/psirt/pote...-power-family/
            Indeed, Spectre is not Intel-specific. I was referring to Meltdown specifically. However, I had not realized (or forgot) that some of the POWER series CPUs were affected as well. Thanks for the clarification.



            • #76
              Originally posted by coder View Post
              Huh? What did Vulkan do to GPUs?
              Somehow, I don't see most server apps doing this. I wouldn't run my entire compiler tool chain, my database server, or my NFS server in a web browser or headless equivalent.
              They do matter, and they matter for reasons like power efficiency and security, if not also performance.
The success of x86 CPUs just shows how vast sums of money can eke a bit more life out of a bad ISA. Eventually, like a dog reaching the end of its leash, the limitations inherent in x86 will catch up with it, and it will finally be surpassed by better ISAs.
So you think WebAssembly bytecode is only a technology for browser apps? On that point you are wrong. At the beginning of its development it started as a browser-only technology, yes, but that's history, because today the aim for this technology is the use case outside of the browser.

In the end you will be able to develop any kind of app, even server apps or databases, using WebAssembly bytecode technology.

This will end the ISA war, because it will run on any CPU that has a WebAssembly bytecode layer/engine.



              • #77
                Originally posted by Qaridarium View Post
So you think WebAssembly bytecode is only a technology for browser apps? On that point you are wrong. At the beginning of its development it started as a browser-only technology, yes, but that's history, because today the aim for this technology is the use case outside of the browser.
                I also allowed for a "headless equivalent". Did you not see that?

                Originally posted by Qaridarium View Post
In the end you will be able to develop any kind of app, even server apps or databases, using WebAssembly bytecode technology.

This will end the ISA war, because it will run on any CPU that has a WebAssembly bytecode layer/engine.
I don't really see what you think is so different or better about WebAssembly, for non-browser use, than Java bytecode or Microsoft's Common Language Infrastructure (introduced with C#). In the 25 (and 20) years they've been around, those didn't replace natively compiled code. They made a dent, but so did the raft of interpreted languages and people using Node.js (which JIT-compiles JavaScript).

Sure, WebAssembly is happening, and that's a good thing. I just don't see it taking ISA considerations out of the equation. Nor is its use outside the browser anything new or revolutionary.



                • #78
                  Originally posted by coder View Post
                  I also allowed for a "headless equivalent". Did you not see that?
I don't really see what you think is so different or better about WebAssembly, for non-browser use, than Java bytecode or Microsoft's Common Language Infrastructure (introduced with C#). In the 25 (and 20) years they've been around, those didn't replace natively compiled code. They made a dent, but so did the raft of interpreted languages and people using Node.js (which JIT-compiles JavaScript).
Sure, WebAssembly is happening, and that's a good thing. I just don't see it taking ISA considerations out of the equation. Nor is its use outside the browser anything new or revolutionary.
I found the error: "Microsoft" and "Oracle".

Do you really think that anything that comes from these evil companies has any useful purpose?

Also, the biggest difference is the same as the difference between DirectX 11 and Vulkan: one is a high-level API, the other is a low-level API.

Same with Java vs. WebAssembly: Java is high-level bytecode, whereas WebAssembly is low-level bytecode.

This means Java was designed to be as slow as possible, while WebAssembly is designed to have native speed (like a native ISA).



                  • #79
                    Originally posted by ldesnogu View Post
How exactly will you ensure the built CPU has no back door? Just as open-source software you don't compile yourself could have back doors, so could the hardware; and it's almost impossible to find back doors in a physical CPU. I'm not even sure you can fully reverse-engineer a multi-layer CPU.
                    even if you have what is termed an "open" CPU you still have both correctness to prove as well as layout and tapeout as opportunities for compromise. then, do you trust the *designers* when they say "oh yeees, we made a trustable chip, you can totally trust us on that" (yes there are companies that genuinely state this)

                    err no.

                    independent analysis. formal proofs that are also libre / open. this is how you get even remotely close to a trustable design. it is the approach we are taking with LibreSOC.

                    for tapeout, a different approach is needed, based on reputation. do you think that a Foundry would be happy to run a compromised design, knowingly or unknowingly? Foundry being a multi billion dollar business, that is. what would happen to them if they were caught? how long do you think they'd stay in business? what do you rate the probability of a Foundry allowing that to happen, even once?



                    • #80
                      Originally posted by tildearrow View Post
                      I am pretty sure I know who is greatly concerned. It's uid.


                      Wait wait wait, "is POWER dead"? Really? I do not think it is...
of course not. IBM, Freescale/NXP etc. just do not make a huge amount of noise. Freescale has been doing both embedded POWER ISA MCUs and multi-core GHz-class 64-bit CPUs for years.

                      debian has had powerpc packages for two decades.

it's a strong, *stable*, long-term business for a lot of companies and it hasn't gone away. it's just that because it's not "new", people forget it exists.

