
Future Intel Systems To Reportedly Be Even Less Friendly For Open-Source Firmware


  • #21
    To provide some thoughts about the original post: from a firmware developer's perspective, Intel moving further away from free firmware is pretty much expected. I believe it's actually part of their overall open-source strategy. One has to consider the whole software ecosystem to see this: Intel supports Linux nowadays, which is quite nice of course. However, AFAICS, they only support it as open source, not as free software. Every time they want to support new silicon in Linux, they can decide whether to open-source it or to put it into a firmware blob and hide the details from Linux developers. Of course, for that they need proprietary firmware.

    The issue is not very visible on consumer platforms, where Intel doesn't want to hide things that happen at OS runtime. However, on Intel-based servers, for instance, there are advanced features that they don't contribute as open source. RAS comes to mind, that's Reliability, Availability and Serviceability: a set of features close to hardware parts that they traditionally keep under wraps (e.g. the memory controller). So they put such OS features into SMM blobs instead of the kernel. I once joked that without blobs, Linux on Intel-based servers must be unreliable, unavailable and unmaintainable. Actually, I don't know how well it would work without blobs; I don't work with servers, so please take this information with a grain of salt.

    IMHO, this is an ecosystem problem, and we can't expect silicon vendors who profit from the status quo to fix it. And it doesn't matter if they produce RISC-V, ARM, OpenPOWER or x86; to some degree these issues exist everywhere. I don't want to blame OS developers, but there are some unsettling oddities in that direction. For instance, Linux-libre is practically rewarding proprietary firmware: they drop code for silicon where the vendor is honest about the blob situation and keep code for silicon where the vendor hides blobs in the firmware. I don't know how to fix this; free operating systems always have to compromise to support hardware that isn't designed for free software. I hope Linux will at some point have enough leverage, and use it, to abandon OS blobs hidden in firmware.


    • #22
      Originally posted by darkbasic View Post
      Any news on the AMD front in this regard?
      AMD is supporting coreboot once more (with FSP blobs too at the moment; the future direction is unknown to me). However, it's the same as everywhere else with AMD: they provide so little documentation about their hardware that virtually nobody but AMD employees can provide or maintain open-source code.

      I'd say if a developer knows what they are getting into, it's OK to buy undocumented hardware. However, an end user who wants other people to write open-source support for their hardware shouldn't buy a CPU without a proper datasheet. If you bought one, or are considering buying one, please ask AMD about public datasheets, at least at the level Intel provides.


      • #23
        Originally posted by PublicNuisance View Post

        Well, AMD is using the Pluton security processor made by Microsoft on its newer chips, so both Intel and AMD are going in terrible directions in their own way.
        No, no, no. Don't forget that within the FOSS community, AMD is always the holy grail of open source (only RISC-V stands above the holy grail), so don't you dare say anything bad about it!

        (I'm an AMD user too, since December, btw)


        • #24
          Perfect opportunity for AMD to finally follow through with their announced plans from years ago to open up PSP and their firmware...


          • #25
            Originally posted by Termy View Post
            Perfect opportunity for AMD to finally follow through with their announced plans from years ago to open up PSP and their firmware...
            I think both Intel and AMD find it difficult to 'open up', not least because the American authorities like to have back doors. My personal feeling is that it will take a non-aligned country starting to make open hardware CPUs, even on old process nodes, before even slightly trustworthy hardware becomes available to the general public.

            There is progress, though. Around RISC-V, many people are developing open tools to allow fabrication, for example using the PicoRV32 design; so it is not just that the ISA is royalty-free, but the ecosystem is slowly and gradually extending to avoid the use of non-free tools. Building a SoC that uses somebody else's proprietary modules as building blocks is not the destination, but a step on the way.

            At some point a sufficiently capable government will realise that trusting supposedly secret communications to foreign manufactured chips is a silly idea, and if you really want to keep things secret, you need to manufacture your own from the ground up. Make them open hardware so the world is your tester, and sell them to all-comers. If it becomes illegal for someone from a 5-6-9-14-eyes country to buy an encryptor from say, Brazil, you'll know they are doing something right.


            • #26
              Originally posted by billyswong View Post

              But the direction towards RAM-with-firmware is unlikely to change. Outside IBM, companies are developing CXL. Eventually all RAM sticks will move to serial interface. Just like how all SSD and HDD contain binary blobs, so will all the RAM sticks on board.
              I expect IBM's version of the bridge chip to have open firmware, just as they provided open-source memory init code for POWER9. It's literally code doing the same job, just in a different location.


              • #27
                Originally posted by onlyLinuxLuvUBack View Post
                I think an interesting phoronix article could be:

                Let's say Intel goes even more evil. What if I wanted to use a Steam Deck in an old desktop case, with an attached USB-C hub, a TV-connected display, a USB mouse/keyboard, and a USB-converted SATA HDD? Say I leave it on SteamOS, enable developer mode, then run Distrobox and install Ubuntu LTS Server with the Xfce desktop added.

                What would the desktop performance feel like, and how would Phoronix performance measurements/benchmarks of the added Ubuntu compare against a current Intel desktop?

                Steam Deck converted to a home desktop vs. Intel tower desktop performance comparison?

                Mostly, install Ubuntu on the USB/SATA HDD, if it's possible to specify that with Distrobox.
                ...what? The Steam Deck is a 45 W PC. Why not just get an APU in a desktop form factor instead of comparing a handheld PC to a desktop?

                Plus, you don't have to do anything weird to boot other OSes on the Steam Deck; it's just a PC that ships with Arch Linux and some funky repositories.


                • #28
                  Originally posted by phoronix View Post
                  Phoronix: Future Intel Systems To Reportedly Be Even Less Friendly For Open-Source Firmware

                  According to the Coreboot camp, future Intel systems with FSP 3.0 and Universal Scalable Firmware (USF) will be even less friendly for open-source system firmware...

                  The whole point for the last 23 years was to lock us out of our PCs; the people on this site are oblivious. If you've bought any "MMO" or any client-server PC game in the last 23+ years, you're already too clueless about the agenda.

                  In the mid '90s the tech industry woke up to the fact that the average PC user was an irrational, oblivious idiot about basic facts of networked computing: two or more computers in a network become, and behave as, a single device.

                  So that means you NEVER want a client-server executable, or else you lose control of your PC. In 1997, when Ultima Online came out, the business community was jealous of what Richard Garriott pulled off with it: they literally got the public to pay to steal software from themselves.

                  The "massively multiplayer" moniker was a bullshit invention to defraud the population out of game ownership and jack up PC game prices. As someone who gamed in the '90s, I expected the future of PC gaming to stay local apps forever: the traditional single-player campaign plus multiplayer inside the same box. I'd never have imagined the public would be dumb enough to buy a piece of software it didn't own or control, because that's the same thing as stealing software from yourself.

                  That's what happened with Ultima Online, EverQuest, Lineage, and Guild Wars 1.

                  Those games are just PC RPGs with their networking code ripped out into a separate exe and sold back to you at inflated prices. We already had infinite multiplayer inside the Quake 2 engine in '97, and it didn't require giving up game ownership, nor usernames or login accounts.

                  Go have a look--

                  4. How does TC work?

                  TC provides for a monitoring and reporting component to be mounted in future PCs. The preferred implementation in the first phase of TC emphasised the role of a `Fritz' chip - a smartcard chip or dongle soldered to the motherboard. The current version has five components - the Fritz chip, a `curtained memory' feature in the CPU, a security kernel in the operating system (the `Nexus' in Microsoft language), a security kernel in each TC application (the `NCA' in Microsoft-speak) and a back-end infrastructure of online security servers maintained by hardware and software vendors to tie the whole thing together.

                  The initial version of TC had Fritz supervising the boot process, so that the PC ended up in a predictable state, with known hardware and software. The current version has Fritz as a passive monitoring component that stores the hash of the machine state on start-up. This hash is computed using details of the hardware (audio card, video card etc) and the software (O/S, drivers, etc). If the machine ends up in the approved state, Fritz will make available to the operating system the cryptographic keys needed to decrypt TC applications and data. If it ends up in the wrong state, the hash will be wrong and Fritz won't release the right key. The machine may still be able to run non-TC apps and access non-TC data, but protected material will be unavailable.

                  The operating system security kernel (the `Nexus') bridges the gap between the Fritz chip and the application security components (the `NCAs'). It checks that the hardware components are on the TCG approved list, that the software components have been signed, and that none of them has a serial number that has been revoked. If there are significant changes to the PC's configuration, the machine must go online to be re-certified: the operating system manages this. The result is a PC booted into a known state with an approved combination of hardware and software (whose licences have not expired). Finally, the Nexus works together with new `curtained memory' features in the CPU to stop any TC app from reading or writing another TC app's data. These new features are called `LaGrande Technology' (LT) for the Intel CPUs and `TrustZone' for the ARM.

                  Once the machine is in an approved state, with a TC app loaded and shielded from interference by any other software, Fritz will certify this to third parties. For example, he will do an authentication protocol with Disney to prove that his machine is a suitable recipient of `Snow White'. This will mean certifying that the PC is currently running an authorised application program - MediaPlayer, DisneyPlayer, whatever - with its NCA properly loaded and shielded by curtained memory against debuggers or other tools that could be used to rip the content. The Disney server then sends encrypted data, with a key that Fritz will use to unseal it. Fritz makes the key available only to the authorised application and only so long as the environment remains `trustworthy'. For this purpose, `trustworthy' is defined by the security policy downloaded from a server under the control of the application owner. This means that Disney can decide to release its premium content only to a media player whose author agrees to enforce certain conditions. These might include restrictions on what hardware and software you use, or where in the world you're located. They can involve payment: Disney might insist, for example, that the application collect a dollar every time you view the movie. The application itself can be rented too. The possibilities seem to be limited only by the marketers' imagination.

                  5. What else can TC be used for?

                  TC can also be used to implement much stronger access controls on confidential documents. These are already available in a primitive form in Windows Server 2003, under the name of `Enterprise rights management' and people are experimenting with them.

                  One selling point is automatic document destruction. Following embarrassing email disclosures in the recent anti-trust case, Microsoft implemented a policy that all internal emails are destroyed after 6 months. TC will make this easily available to all corporates that use Microsoft platforms. (Think of how useful that would have been for Arthur Andersen during the Enron case.) It can also be used to ensure that company documents can only be read on company PCs, unless a suitably authorised person clears them for export. TC can also implement fancier controls: for example, if you send an email that causes embarrassment to your boss, he can broadcast a cancellation message that will cause it to be deleted wherever it's got to. You can also work across domains: for example, a company might specify that its legal correspondence only be seen by three named partners in its law firm and their secretaries. (A law firm might resist this because the other partners in the firm are jointly liable; there will be many interesting negotiations as people try to reduce traditional trust relationships to programmed rules.)

                  TC is also aimed at payment systems. One of the Microsoft visions is that much of the functionality now built on top of bank cards may move into software once the applications can be made tamper-resistant. This leads to a future in which we pay for books that we read, and music we listen to, at the rate of so many pennies per page or per minute. The broadband industry is pushing this vision; meanwhile some far-sighted people in the music industry are starting to get scared at the prospect of Microsoft charging a percentage on all their sales. Even if micropayments don't work out as a business model - and there are some persuasive arguments why they won't - there will be some sea-changes in online payment, with spillover effects for the user. If, in ten years' time, it's inconvenient to shop online with a credit card unless you use a TC platform, that will be tough on Mac and GNU/Linux users.

                  The appeal of TC to government systems people is based on ERM being used to implement `mandatory access control' - making access control decisions independent of user wishes but based simply on their status. For example, an army might arrange that its soldiers can only create Word documents marked at `Confidential' or above, and that only a TC PC with a certificate issued by its own security agency can read such a document. That way, soldiers can't send documents to the press (or email home, either). Such rigidity doesn't work very well in large complex organisations like governments, as the access controls get in the way of people doing their work, but governments say they want it, and so no doubt they will have to learn the hard way. (Mandatory access control can be more useful for smaller organisations with more focused missions: for example, a cocaine smuggling ring can arrange that the spreadsheet with this month's shipment details can be read only by five named PCs, and only until the end of the month. Then the keys used to encrypt it will expire, and the Fritz chips on those five machines will never make them available to anybody at all, ever again.)


                  Here's the patent from MS/Intel and company in 2001:


                  Maybe you guys will wake up: there's been a war on the general-purpose computer for over 23 years, because Intel, Microsoft and company want to kill piracy. The advent of Windows 10/11 is the end of programs as local applications, because the vast majority of PC users are irrational and computer-illiterate.

                  TPM and the other tech inside newer machines are literally a digital secret police on your PC.


                  • #29
                    Soon99 Thank you for the well-written, sourced, and detailed information. I will lose some sleep over this tonight thinking about it, but in the end, what can we expect, really? I'm very disappointed in these so-called "hackers" who do all kinds of crazy things but have done little to break the Intel Management Engine, let alone AMD's PSP. Now that this is moving on to extremes like the things you've mentioned, I have little hope for the future. There was some information I read a couple of years ago that the NSA is actively targeting computer users who care about their privacy, and it seems that ME/PSP is not enough for them. We can only hope that there will be a way to evade this somehow.


                    • #30
                      Originally posted by PublicNuisance View Post
                      AMD also had no Coreboot support before and used PSP so it's not like they were saints before Pluton.
                      There are coreboot-supported AMD boards without PSP, e.g. the Lenovo G505S with an A10-5750M, the ASUS A88XM-E with an A10-6800K, and the ASUS AM1I-A with an Athlon 5370. They are from the Family 15h / early Family 16h generations, which didn't have the AMD PSP. Although these quad-core CPUs are from 2013-2014, they are still powerful enough for modern tasks.