
Weekend Discussion: How Concerned Are You If Your CPU Is Completely Open?


  • #81
    Originally posted by Terrablit View Post

    However, with the way most systems are built, it's not usually the CPU that's the weakness. It's the firmware, security processor, network chips, and base install configuration.
    yes. this is why we are designing Libre-SOC to be "unbrickable" and entirely without DRM. "unbrickable" means that externally facing interfaces that are 100% likely to be exposed to users will act, without fail, at power-on, as startup boot interfaces (U-Boot SPL mode).

    therefore, unless the OEM is spectacularly stupid or devious, with no DRM at the boot level the end user is always given the option to boot an OS of their choice, no matter what the OEM put on the device as sold.
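
    to make "unbrickable" concrete, here is a minimal C sketch of the power-on logic, with entirely hypothetical helper names; the real Libre-SOC boot ROM will differ. the point is only that the list of boot sources is fixed, is always tried at power-on, and cannot be disabled by anything the OEM ships:

        typedef int (*boot_fn)(void);

        /* hypothetical per-interface probe/boot helpers: each returns 0
           once it has handed control to a U-Boot SPL image it found */
        extern int try_boot_spi_flash(void);
        extern int try_boot_sdcard(void);
        extern int try_boot_uart_xmodem(void);
        extern int try_boot_usb_dfu(void);

        static const boot_fn boot_sources[] = {
            try_boot_spi_flash,    /* whatever the OEM shipped */
            try_boot_sdcard,       /* user-replaceable media */
            try_boot_uart_xmodem,  /* always-available recovery path */
            try_boot_usb_dfu,
        };

        void rom_main(void)
        {
            /* loop forever: there is no state in which every boot source
               is refused, hence no way to brick the SoC at this level */
            for (;;)
                for (unsigned i = 0; i < sizeof boot_sources / sizeof boot_sources[0]; i++)
                    if (boot_sources[i]() == 0)
                        return;
        }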


    Having an open CPU is something, but I'd much rather force manufacturers to allow consumer replacement of firmware and to provide firmware sources for their systems. It'd also convince more manufacturers to cooperate with Linux if they already know that their source tree will be visible from the start.
    it starts with the CPU, and continues with the CPU designer acting responsibly (like Freescale, Texas Instruments and to some extent Samsung) by providing FULL BSPs with FULL source.

    expecting OEMs to release source even under the legal obligations of Copyright Law is unrealistic: see AMLogic and Allwinner.

    by making the CPU unbrickable and providing full source right from the start, even if legislation is hopelessly inadequate, at least the end user stands a chance of not wasting years on reverse engineering, or of being critically dependent on the XDA developer community and people like myself who have reverse-engineering skills.

    the reason why i started Libre-SOC is because i realised, after buying NINE HTC smartphones back in 2003, that i could either spend years of my life *on each processor and each product released by an OEM* on reverse engineering to "fix" this stupid situation

    or

    i could start a Libre processor and fix it for good, and stop the criminal waste of Libre engineers' time and the exploitation of their good will.

    Last edited by lkcl; 24 February 2020, 05:59 AM.

    Comment


    • #82
      Originally posted by mulenmar View Post

      Oh, right, we need royalty free open SATA/whatever controllers, memory chip designs, USB2 controllers, and all that.

      EnjoyDigital's LiteSATA, LiteDRAM, LiteEth and LitePCIe.

      that takes care of the controller side. the PHYs are all analog, and therefore the layout has to be done per geometry, per foundry.

      Comment


      • #83
        Originally posted by bachchain View Post
        Concern is probably the wrong word. The world has been built on proprietary processors for decades, and nothing catastrophic has happened yet. So no, I'm not concerned.
        The growing list of hardware security faults in mainstream CPUs is a catastrophe (announced to vendors directly on 1 June 2017).

        A hardware fault so big that it needed to be fixed in every part of the software stack in every operating system. Kernels, compilers, daemons, virtualization, runtime environments, web browsers, you name it... Everything needed to change, and it needed to change suddenly. Nobody knows the amount of setback and unnecessary work it has caused. Nobody is announcing the total cost of the damage because they have their tails between their legs.
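
        To see why the fixes had to reach into kernels and compilers alike, here is the Spectre v1 pattern in miniature, together with the branch-free index masking that Linux adopted as array_index_nospec(). This is a sketch with illustrative names, not the exact kernel code:

            /* the vulnerable pattern: the branch can be predicted taken even
               when idx >= size, so the dependent load runs speculatively and
               leaves a cache footprint an attacker can measure afterwards */
            unsigned char gadget(const unsigned char *arr, unsigned long idx,
                                 unsigned long size, const unsigned char *probe)
            {
                if (idx < size)
                    return probe[arr[idx] * 512];  /* speculative OOB read */
                return 0;
            }

            /* the software fix: clamp the index without a branch, so even a
               mispredicted path cannot form an out-of-bounds address
               (assumes an arithmetic right shift, as the kernel does) */
            static unsigned long index_nospec(unsigned long idx, unsigned long size)
            {
                /* mask is all ones when idx < size, all zeroes otherwise */
                unsigned long mask =
                    (unsigned long)((long)(idx - size) >> (sizeof(long) * 8 - 1));
                return idx & mask;
            }

        And that is just one of the fixes; retpolines, kernel page-table isolation and barrier insertion each touched a different layer of the stack.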

        > Certainly it's a nasty hack, but hey the world was on fire and in the
        > end we didn't have to just turn the datacentres off and go back to goat
        > farming, so it's not all bad.

        > It's not that it's a nasty hack. It's much worse than that.
        > As it is, the patches are COMPLETE AND UTTER GARBAGE.
        >
        > They do literally insane things. They do things that do not make
        > sense. That makes all your arguments questionable and suspicious. The
        > patches do things that are not sane.
        >
        > WHAT THE F*CK IS GOING ON?
        If this is how the public discussions went, can you imagine the ones behind closed doors?

        Honest question: If not what I mentioned, how bad does it need to get before you would call something a catastrophe?

        Comment


        • #84
          Originally posted by wizard69 View Post
          Now that doesn't mean open hardware is bad. It is certainly a good place to train the next round of engineers. What I don't buy is that open hardware has any value to the population in general; the reason is the massive wall one would have to breach to do anything constructive with a hardware description.
          So do you think the next round of engineers has no value to the population in general? Sometimes you have to behave so that society improves; looking only at your immediate profit is short-sighted. Those engineers in the next round won't be able to buy open hardware if they are the only ones buying it, because of economies of scale. And the more knowledge about something is available to all, the more likely it is that others, besides the vendor, can do good things with it. It doesn't need to be another design that one has to etch; it could also be insight into how to better tune a driver, or into which tasks to allocate to each model of open hardware... Or it could simply be a less monopolistic market in which one open-hardware vendor reuses designs from another, with lower costs, lower prices for consumers and a little more compatibility. I don't know.

          In the end, the more open the hardware is, the less used I feel by the hardware; and the less used I feel by the hardware, the more I use the hardware.

          Comment


          • #85
            Originally posted by Jabberwocky View Post
            The growing list of hardware security faults in mainstream CPUs is a catastrophe (announced to vendors directly on 1 June 2017).
            A hardware fault so big that it needed to be fixed in every part of the software stack in every operating system. Kernels, compilers, daemons, virtualization, runtime environments, web browsers, you name it... Everything needed to change, and it needed to change suddenly. Nobody knows the amount of setback and unnecessary work it has caused. Nobody is announcing the total cost of the damage because they have their tails between their legs.
            If this is how the public discussions went, can you imagine the ones behind closed doors?
            Honest question: If not what I mentioned, how bad does it need to get before you would call something a catastrophe?
            Yes, it is official that these anti-FLOSS, anti-open-source-hardware, anti-OpenPOWER people and so on are "INSANE"

            it is insane how these people abuse this forum to push their pro-Intel-closed-ISA and pro-closed-source-Nvidia agenda, and the result is that the first 4 pages are complete bullshit.

            and none of these shitheads talks about stuff like WebAssembly bytecode, which would in fact end the Intel ISA war.

            they call IBM/POWER a dead horse, but they just need to write a WebAssembly bytecode layer to be compatible with any modern app.
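
            a minimal sketch of what such a compatibility layer relies on: C compiled once to wasm32 runs on any host ISA that has a WebAssembly runtime. the build command in the comment is one plausible clang/wasm-ld invocation, not the only way:

                /* add.c: ISA-neutral by construction. compiled once to
                 * add.wasm, the same binary runs unmodified on x86, POWER,
                 * ARM or RISC-V under any WebAssembly runtime.
                 *
                 * one plausible build (clang with the wasm-ld linker):
                 *   clang --target=wasm32 -nostdlib -Wl,--no-entry \
                 *         -Wl,--export-all -o add.wasm add.c
                 */
                int add(int a, int b)
                {
                    return a + b;
                }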

            Intel lost the ISA war because AMD, with the inferior ISA, beat the shit out of Intel's superior and more modern ISA.

            this means that in a modern CPU battle the ISA is no longer the key feature for winning benchmarks; you can win with an inferior ISA against a superior ISA.

            what most people do not see, because they compare IBM's 14nm CPUs to AMD's 7nm CPUs, is that the IBM POWER ISA is the most advanced and fastest ISA in the world. IBM only lags a die shrink, from 14nm to 7nm, that's all. taken alone, without the 14nm-vs-7nm difference, the POWER ISA is the fastest ISA in the world.

            this shows how stupid the people are who call the POWER ISA a dead horse. they just do not know what they are talking about.

            their education is ZERO, their understanding is ZERO, their intelligence is ZERO...

            but i am happy: thanks to WebAssembly the ISA war is over, and Intel lost.
            Phantom circuit Sequence Reducer Dyslexia

            Comment


            • #86
              As Intel's continuing vulnerability saga shows, auditability at the microcode level would have genuine benefits. It would also allow existing CPU hardware to be repurposed for specialized applications without any additional R&D or investment from the vendor, thus increasing the overall value of the product. (It would rule out artificial segmentation though.) That being said, yes, UEFI and management engine are the two biggest priorities.

              As for POWER itself, I fear rumors of its death may be well founded. Despite my overall enthusiasm for the concept, some things have stopped me from buying a Raptor workstation so far. I would pay extra for an open-hardware workstation, but paying this much extra for
              • Performance that isn't up to par with x86 processors costing far, far less
              • Power consumption that would make me feel guilty buying it, especially idle power consumption
              • Lack of good-quality low-noise coolers for the CPU socket (maybe I could improvise something, but that's just speculation)

              has just never added up for me.

              Note that x86 compatibility was not on that list! The only closed-source software I run periodically is games, and I could keep a separate computer for that. Everything else is either portable or could be made portable if needed. Relatively few of my workloads have super-optimized AVX code paths either. I'm actually not tied that closely to the x86 instruction set.

              It's too bad; I want to support Raptor because I think their goals are worthy. But I'm not going to make such a big purchase out of nothing but charity. I think a next-gen POWER CPU that can go toe-to-toe with Zen 2 or Zen 3 would go a long way toward making a high-end POWER workstation anything other than an oxymoron. IBM definitely has the resources to build something like that if it made it a priority.

              EDIT: maybe the cost looks better against Xeon-W, but Xeon-W has brand momentum and inertia keeping it alive. From a performance standpoint, it's also totally uncompetitive and not something you want to base your price/performance curve off of.

              EDIT: I have been corrected about idle power consumption. See later posts.
              Last edited by MaxToTheMax; 25 February 2020, 10:22 PM.

              Comment


              • #87
                My Blackbird idled at 55W, whereas my Phenom II box idles at 50W. According to Guru3D, a 12-core Threadripper 1920X idles at 93W. Slightly apples and oranges, but I don't see how it's bad. As for performance, architecture optimizations have a great effect in lame/flac/etc., and in compilation workloads POWER chips usually beat comparable 14nm x86 processors in Phoronix benchmarks.

                Comment


                • #88
                  Originally posted by torsionbar28 View Post
                  Basically, that's no one. Outside of foreign intelligence agencies, anyway. The industry trend is to not give a crap about hardware, and to move all your stuff to the cloud, i.e. someone else's hardware. The fact is, nobody gives a crap about this "open CPU" talk anymore except geeks and academics. I'm not saying the arguments aren't valid, I'm just saying the whole cloud trend has rendered the argument obsolete for essentially every corporation in the world. Without a customer base for this stuff, it will not be taking off any time soon.
                  That's the point. China cares because they made it a national goal to get their own chips and supply line: partly from licensing AMD tech, partly from having a local company partner with VIA, partly from local silicon research companies, and partly from rampant IP theft. It's something they've already spent billions on. The US cares somewhat because of all the Huawei mess, but they already have a U.S.-based chip supply. Russia probably cares, and might do something about it when they finish annexing their neighbors. U.K.? Probably not, as they're still imploding from Brexit. Most of the less-influential countries don't have the capital to care about it. But I'm sure the "can we really trust foreign electronics" line comes up in meetings.

                  Most major corporations won't care because building your own tech doesn't offer enough ROI. And most of the confidential data is user data, which gets stolen every few years anyway. A few like to design their own chips and hardware, and having open specs makes it easier to start. But, honestly, it's pretty much nation-states, and among them, China is leading the charge on not wanting any reliance on foreign infrastructure. Because of all this, it may be that for underdogs and new contenders in the CPU game, open specs help them compete internationally in an era of minimal trust.

                  Currently POWER and RISC-V are enough for the paranoid to use. But opening everything beyond the CPU has real consumer value.

                  Comment


                  • #89
                    As a data point my Blackbird w/ v2 4-core CPU idles at ~30W. Newer kernels + the v2 chip really do reduce that idle power use.

                    Comment


                    • #90
                      Originally posted by Qaridarium View Post
                      I found the error: "Microsoft" and "Oracle"
                      do you really think that anything that comes from these evil companies has any useful purpose?
                      I'll ignore the tin-foil-hat conspiracy nonsense, beyond pointing out that Java came from Sun and that both OpenJDK and .NET Core are actually pretty safe to use. Besides, the real fun comes next.

                      Originally posted by Qaridarium View Post
                      also the biggest difference is the difference between DirectX11 and Vulkan... one is a high-level API, the other is a LOW-level API.
                      same with Java vs WebAssembly... Java is high-level bytecode whereas WebAssembly is low-level bytecode.
                      this means java was designed to be as slow as possible and webassembly is designed to have NATIVE speed (like a native ISA).
                      Wat.

                      Where on earth did you get the idea that intermediate bytecode in a VM can ever offer native speeds? Have you even used either of these? Your arbitrary distinction between high- and low-level bytecode is just bad and wrong, man. To give you a simile, it's like you wrote a vampire romance fanfiction but decided to make them glitter and brood over unattractive teens instead of sucking blood. The constraints of the web don't allow "low-level" bytecode. And, while WASM is usable outside the web, it was still designed to be used on the web, which means it's got built-in security constraints. In short, it's not "lower-level" than the JVM or CLR. It's simpler. And things like this don't stay simple for long.

                      You vastly underestimate the scope of writing a VM for portable code designed for security sandboxing and interaction with JavaScript. Some operations and benchmarks will get close to native speeds because there's very little to sandbox or rewrite. The same thing happened with the JVM and CLR. But others will clearly show the difference that always exists between native code and any type of intermediate representation (IR). In many cases the performance sacrifice is worth it, but it's never worth pretending it doesn't exist. So please don't bullshit. It's our responsibility as adults to make sure that uninformed people don't see this nonsense and think it's true.
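
                      To make the gap concrete, here is a sketch, with hypothetical runtime types, of what a single WASM i32.load costs an engine that can't lean on guard pages. Native code is just the load; the check is the price of the sandbox:

                          #include <stdint.h>
                          #include <string.h>

                          extern _Noreturn void wasm_trap(const char *msg);  /* hypothetical */

                          typedef struct {
                              uint8_t *base;  /* start of this instance's linear memory */
                              uint64_t size;  /* current linear-memory size in bytes */
                          } wasm_memory;      /* hypothetical runtime bookkeeping */

                          static int32_t i32_load(const wasm_memory *mem, uint32_t addr)
                          {
                              /* the sandbox: every access is checked against the bound */
                              if ((uint64_t)addr + sizeof(int32_t) > mem->size)
                                  wasm_trap("out-of-bounds memory access");
                              int32_t v;
                              memcpy(&v, mem->base + addr, sizeof v);  /* the actual load */
                              return v;
                          }

                      Real engines often replace the explicit check with guard pages or address masking, but that machinery has costs of its own either way.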

                      EDIT: Hell, in my excitement I even forgot to point out that the Vulkan vs DirectX11 comparison is wrong. Vulkan is most comparable to DirectX12, not the previous generation, and DirectX12 offers pretty much the same amount of fine-grained control as Vulkan. It's just got baggage for interop with previous APIs and is locked to Windows. It also has non-graphics stuff in the API. It seems you get this stuff wrong a lot. Please never talk about whether an API is high- or low-level ever again.
                      Last edited by Terrablit; 24 February 2020, 02:40 PM.

                      Comment
