Intel Planning To End Legacy BIOS Support By 2020


  • #41
    Would be nice if we could finally go to something better, like u-boot or openboot.
    The PC has really been lagging in manageability by means of something like a serial console.
    Intel did have a serial-based bootloader for one of the CE variants of the Atom. But alas, their Linux support for that CE series had stopped at 2.6.28 the last time I tried to compile something, and judging by the current state of the interwebs, the existence of Berryville has almost been wiped off the face of the earth.



    • #42
      Originally posted by madscientist159 View Post
      So...are people going to start migrating off x86 now, or off of Linux in 2020? There are still options now (OpenPOWER / ARM) but if people keep going after the cheapest Wintel / Android DRM-laden solutions now, there won't be any other options in 2020....
      Even ARM devices are starting to slowly switch over to UEFI from u-boot.

      Originally posted by Ardje View Post
      Would be nice if we could finally go to something better, like u-boot or openboot.
      u-boot better than UEFI? rotflmao.



      • #43
        I like u-boot over UEFI any day. It's open source, it isn't overcomplicated, and it isn't slanted toward Windows (fuck PE EXEs and patented FAT32). Unlike TianoCore it's GPLed, implying no fucking blobs in my systems. Though not even u-boot or coreboot can fix, e.g., the ME thing. Same for AMD's "security" processors. Some hostile backdoor crap just runs on a different CPU, so it would be fair to admit that these days x86 systems are backdoored straight from the factory. Whatever, UEFI was only meant to make things convenient for Wintel. Everyone else is out of luck. Really great way to create a new standard: shove it down people's throats under threat of being unable to boot at all. I really like this racket-style engineering by Wintel.



        • #44
          Originally posted by madscientist159 View Post

          How do you propose getting your custom firmware (that would take multiple man-years of effort to create, best case) past the lower levels of the DRM currently present on Intel and AMD systems (namely the ME, PSP, etc., implementing technologies like Boot Guard)? Those DRM technologies use strong cryptography that is basically unbreakable (hence why current hacks use other mechanisms like JTAG to break in -- this is useless to the "good guys" but excellent for bad actors).

          Again, if you're stuck on Windows, you have zero control and privacy anyway, and probably don't (or at least shouldn't) really care much about the hardware enforcing that lack of control. If you're on Linux you have a choice (for now), so why again do Linux users keep embracing the walled garden of x86? Not everything can be hacked past, and eventually the stiff penalties for hacking DRM systems will be enforced more or less universally -- don't take current lax enforcement as a sign of something that will continue, rather look at the recent arrests of various supposed "white hat" hackers for what will eventually happen to anyone trying to break into the x86 walled garden to run unauthorized software. Hic sunt dracones....
          Thanks for your idea; it seems quite inspiring.

          Is the cryptography really that strong, or is it just that nobody brilliant enough has looked at how to break it yet and had the courage and craziness to spread it to the public instead of getting paid by the Big Ones?

          Arrests will happen eventually on a larger scale, I agree about that. It's the new tech-corporatocracy. But rebels will always appear, mostly considered crazy or insane compared to the average person.

          Originally posted by SystemCrasher View Post
          I like u-boot over UEFI any day. It's open source, it isn't overcomplicated, and it isn't slanted toward Windows (fuck PE EXEs and patented FAT32). Unlike TianoCore it's GPLed, implying no fucking blobs in my systems. Though not even u-boot or coreboot can fix, e.g., the ME thing. Same for AMD's "security" processors. Some hostile backdoor crap just runs on a different CPU, so it would be fair to admit that these days x86 systems are backdoored straight from the factory. Whatever, UEFI was only meant to make things convenient for Wintel. Everyone else is out of luck. Really great way to create a new standard: shove it down people's throats under threat of being unable to boot at all. I really like this racket-style engineering by Wintel.
          My sarcasm-o-meter exploded. What happened?



          • #45
            Originally posted by starshipeleven View Post
            inb4 people screaming at the obsolescence or something.

            I'm kinda saddened by the loss of the legacy BIOS mode as it's a loss of options, and I laugh in the face of statements like "will mitigate some security risks" or "allows for supporting more modern technologies" as it's just a module running in the shittiest firmware architecture ever.

            I wonder if there is any EFI application that can take the place of CSM/legacy BIOS mode.

            I personally don't mind its loss much.
            I don't understand why UEFI is even a thing, considering that you could load code to handle disks bigger than 2 TiB from the MBR. I never liked UEFI and its enforcement of a huge ESP partition with an ancient file system for no good reason. I tend to pick BIOS over UEFI when there's no option to replace the firmware.



            • #46
              Originally posted by DoMiNeLa10 View Post
              I don't understand why UEFI is even a thing
              Intel tried its hand at fixing the shitshow that BIOS firmware was/is, and did so with its usual ridiculous overengineering.
              Given that Intel controls what manufacturers must use to boot Intel hardware, it basically forced every OEM to use UEFI.

              you could load code to handle disks bigger than 2 TiB from the MBR.
              Please no, that's a shitty hack. MBR is a limited and obsolete partitioning scheme and should be let go. GPT is far superior.

              That said, the gist of what you say is still true. There is nothing preventing true GPT support from being added to the BIOS. Hell, many BIOS firmwares were able to read files from FAT32 filesystems too, since they used that to load their own update files.
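              Just to show how little code "true GPT support" actually needs, here's a rough user-space sketch of my own (an illustration only, not code from any real firmware) that dumps the GPT partition entries from a raw disk or image. It assumes 512-byte logical sectors and a little-endian x86 host:

              #include <stdio.h>
              #include <stdint.h>
              #include <stdlib.h>
              #include <string.h>

              /* Rough sketch: list GPT partition entries from a raw disk or image.
                 Assumes 512-byte logical sectors and a little-endian host (x86). */
              int main(int argc, char **argv)
              {
                  if (argc != 2) { fprintf(stderr, "usage: %s <disk-or-image>\n", argv[0]); return 1; }
                  FILE *f = fopen(argv[1], "rb");
                  if (!f) { perror("fopen"); return 1; }

                  uint8_t hdr[512];
                  /* The GPT header sits at LBA 1, right behind the protective MBR at LBA 0. */
                  if (fseek(f, 512, SEEK_SET) != 0 || fread(hdr, 1, 512, f) != 512) { fprintf(stderr, "short read\n"); return 1; }
                  if (memcmp(hdr, "EFI PART", 8) != 0) { fprintf(stderr, "no GPT signature\n"); return 1; }

                  uint64_t entries_lba; memcpy(&entries_lba, hdr + 72, 8); /* first LBA of the entry array */
                  uint32_t num_entries; memcpy(&num_entries, hdr + 80, 4); /* usually 128 */
                  uint32_t entry_size;  memcpy(&entry_size,  hdr + 84, 4); /* usually 128 bytes each */

                  uint8_t *e = malloc(entry_size);
                  fseek(f, (long)(entries_lba * 512), SEEK_SET); /* entry array typically starts at LBA 2 */
                  for (uint32_t i = 0; i < num_entries; i++) {
                      if (fread(e, 1, entry_size, f) != entry_size) break;
                      static const uint8_t unused[16] = {0};
                      if (memcmp(e, unused, 16) == 0) continue; /* all-zero type GUID = empty slot */
                      uint64_t first, last;
                      memcpy(&first, e + 32, 8);
                      memcpy(&last,  e + 40, 8);
                      printf("entry %u: LBA %llu..%llu\n", (unsigned)i,
                             (unsigned long long)first, (unsigned long long)last);
                  }
                  free(e);
                  fclose(f);
                  return 0;
              }

              That's basically the whole on-disk format; the hard part in a real firmware is the disk driver, which a BIOS already has anyway.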

              I never liked UEFI and its enforcement of a huge ESP partition with an ancient file system for no good reason.
              256MB or even 512MB is not "huge" by any stretch of the imagination. The ancient file system was chosen because it's simple to write a driver for it, every OS and microcontroller already has one, and it's not encumbered by silly patents.

              The idea behind the ESP partition isn't bad. It's less inflexible bullshit than MBR boot: now you drop your bootloaders in there instead of having only the Master Boot Record to place them in, and you can multiboot by just selecting the OS from the UEFI boot menu, cutting out the need for a true multiboot-capable bootloader.

              The idea behind Secure Boot isn't bad either. It gives you some kind of trust that the system is booting something that wasn't screwed with. It's the implementation that sucks big time.
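              And on the "simple to write a driver for it" point: a FAT32 reader really only needs a handful of fields from the first sector. Another rough sketch of mine (just an illustration; it assumes the argument is a raw FAT32 partition image, e.g. a dd copy of the ESP, and a little-endian host):

              #include <stdio.h>
              #include <stdint.h>

              /* Little-endian field readers for the FAT32 boot sector / BPB. */
              static unsigned rd16(const uint8_t *p) { return p[0] | (p[1] << 8); }
              static unsigned long rd32(const uint8_t *p) { return p[0] | (p[1] << 8) | ((unsigned long)p[2] << 16) | ((unsigned long)p[3] << 24); }

              int main(int argc, char **argv)
              {
                  if (argc != 2) { fprintf(stderr, "usage: %s <fat32-partition-image>\n", argv[0]); return 1; }
                  FILE *f = fopen(argv[1], "rb");
                  if (!f) { perror("fopen"); return 1; }

                  uint8_t bs[512];
                  if (fread(bs, 1, 512, f) != 512) { fprintf(stderr, "short read\n"); return 1; }

                  unsigned      bytes_per_sec = rd16(bs + 11);
                  unsigned      sec_per_clus  = bs[13];
                  unsigned      reserved      = rd16(bs + 14);
                  unsigned      num_fats      = bs[16];
                  unsigned long fat_sectors   = rd32(bs + 36); /* sectors per FAT (FAT32 field) */
                  unsigned long root_cluster  = rd32(bs + 44); /* usually cluster 2 */

                  /* The data area follows the reserved sectors and the FATs; cluster N
                     begins at data_start + (N - 2) * sectors-per-cluster. */
                  unsigned long data_start  = reserved + num_fats * fat_sectors;
                  unsigned long root_sector = data_start + (root_cluster - 2) * sec_per_clus;

                  printf("%u bytes/sector, %u sectors/cluster, root directory at sector %lu\n",
                         bytes_per_sec, sec_per_clus, root_sector);
                  fclose(f);
                  return 0;
              }

              Compare that with the amount of code you would need to walk an ext4 or NTFS volume and the choice of file system makes a lot more sense.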



              • #47
                Originally posted by starshipeleven View Post
                Intel tried its hand at fixing the shitshow that BIOS firmware was/is, and did so with its usual ridiculous overengineering.
                Given that Intel controls what manufacturers must use to boot Intel hardware, it basically forced every OEM to use UEFI.

                Please no, that's a shitty hack. MBR is a limited and obsolete partitioning scheme and should be let go. GPT is far superior.

                That said, the gist of what you say is still true. There is nothing preventing true GPT support from being added to the BIOS. Hell, many BIOS firmwares were able to read files from FAT32 filesystems too, since they used that to load their own update files.

                256MB or even 512MB is not "huge" by any stretch of the imagination. The ancient file system was chosen because it's simple to write a driver for it, every OS and microcontroller already has one, and it's not encumbered by silly patents.

                The idea behind the ESP partition isn't bad. It's less inflexible bullshit than MBR boot: now you drop your bootloaders in there instead of having only the Master Boot Record to place them in, and you can multiboot by just selecting the OS from the UEFI boot menu, cutting out the need for a true multiboot-capable bootloader.

                The idea behind Secure Boot isn't bad either. It gives you some kind of trust that the system is booting something that wasn't screwed with. It's the implementation that sucks big time.
                Considering that I can get away with a 64 MiB boot partition, I consider 512 MiB a huge waste. I'd rather give that empty space to my home partition than have it sitting around with no good use. I think an approach like MBR, where you just read code from a given offset on the disk, is better, as it lets the bootloader implement how it loads itself. The partition table itself is another matter; I've never needed more than 4 partitions, but I understand people want more flexibility in that regard.
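                For reference, everything the firmware sees in the legacy scheme fits in one sector. A rough sketch (an illustration only, assuming a 512-byte sector and a little-endian host) that dumps the classic four-slot table and shows where the 4-partition and 2 TiB limits come from:

                #include <stdio.h>
                #include <stdint.h>

                /* Rough sketch: dump the legacy MBR partition table. 446 bytes of boot
                   code, then four 16-byte slots with 32-bit LBAs (hence the 2 TiB ceiling
                   at 512-byte sectors), then the 0x55AA signature. */
                int main(int argc, char **argv)
                {
                    if (argc != 2) { fprintf(stderr, "usage: %s <disk-or-image>\n", argv[0]); return 1; }
                    FILE *f = fopen(argv[1], "rb");
                    if (!f) { perror("fopen"); return 1; }

                    uint8_t sec[512];
                    if (fread(sec, 1, 512, f) != 512) { fprintf(stderr, "short read\n"); return 1; }
                    if (sec[510] != 0x55 || sec[511] != 0xAA) { fprintf(stderr, "no MBR signature\n"); return 1; }

                    for (int i = 0; i < 4; i++) {               /* only four primary slots, ever */
                        const uint8_t *e = sec + 446 + i * 16;  /* table starts right after the boot code */
                        if (e[4] == 0) continue;                /* type 0 = empty slot */
                        uint32_t start = e[8]  | (e[9]  << 8) | ((uint32_t)e[10] << 16) | ((uint32_t)e[11] << 24);
                        uint32_t count = e[12] | (e[13] << 8) | ((uint32_t)e[14] << 16) | ((uint32_t)e[15] << 24);
                        printf("slot %d: type 0x%02x, start LBA %u, %u sectors%s\n",
                               i, e[4], (unsigned)start, (unsigned)count,
                               (e[0] & 0x80) ? " (active)" : "");
                    }
                    fclose(f);
                    return 0;
                }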



                • #48
                  Originally posted by debianxfce View Post
                  Write instructions on how to clone a GPT disk with GRUB as easily as an MBR disk.
                  https://www.phoronix.com/forums/foru...36#post1083836

                  tldr: use dd, and don't use Debian
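                  For what it's worth, the reason dd "just works" here is that a block-level clone copies everything (boot code, both GPT structures, the ESP) without caring what any of it means. A rough C sketch of the idea (a hypothetical ./rawcopy; real dd adds block-size tuning, syncing, progress and so on). One caveat: if the target disk is a different size, the backup GPT header is no longer at the last sector, so a tool like gdisk has to relocate it afterwards.

                  #include <stdio.h>
                  #include <stdlib.h>

                  /* Rough dd-style copy: every byte of the source block device or image,
                     partition tables included, goes to the target. Destroys the target. */
                  int main(int argc, char **argv)
                  {
                      if (argc != 3) { fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]); return 1; }
                      FILE *in  = fopen(argv[1], "rb");
                      FILE *out = fopen(argv[2], "wb");
                      if (!in || !out) { perror("fopen"); return 1; }

                      enum { CHUNK = 1 << 20 };                  /* copy in 1 MiB chunks */
                      char *buf = malloc(CHUNK);
                      size_t n;
                      while ((n = fread(buf, 1, CHUNK, in)) > 0) {
                          if (fwrite(buf, 1, n, out) != n) { perror("fwrite"); return 1; }
                      }
                      free(buf);
                      fclose(in);
                      if (fclose(out) != 0) { perror("fclose"); return 1; }  /* flush buffered writes */
                      return 0;
                  }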



                  • #49
                    Originally posted by DoMiNeLa10 View Post
                    Considering that I can get away with a 64 MiB boot partition, I consider 512 MiB a huge waste.
                    That's the size for distros that use it as /boot; most others are in the 256MB range, and Windows uses 100MB afaik.
                    But still, as I said, you are living in the past. We aren't in 2009 anymore. Even a GB is mostly irrelevant on modern drives.

                    It's either part of the dozen GBs or so that are "wasted" as overprovisioning on an SSD (which you waste either by remembering to never fill the thing up or by making a smaller partition; I do the latter) or totally irrelevant on a multi-TB mechanical drive.

                    I think an approach like MBR, where you just read code from a given offset on the disk, is better,
                    Even assuming you have multiple slots for this, it limits the size of the bootloader to the space you reserved when you wrote the specification, which would eventually make it obsolete bullshit a decade down the line when "640k isn't enough for everyone" anymore.

                    Using a partition with a simple filesystem allows more flexibility without having to resort to retarded bullshit like implementing a short stub bootloader and then having an additional partition.

                    It's also much easier for maintenance, as now you don't need to write hex stuff raw to a drive offset; you can just copy files around with an OS.

                    it lets the bootloader implement how it loads itself.
                    I also don't really like this from a security standpoint. Yes, you could have a fully signed setup and then it would be fine, but if you mostly force bootloaders to just be "boot managers" like UEFI does (i.e. they rely on the board firmware to do anything serious, like filesystem access), it's still significantly safer than loading and executing random shit from the MBR, without requiring the hassle of a fully signed boot chain.

                    Mind you, I'm not saying UEFI does this well. I'm saying that the general ideas behind what it does are very sound, and that any statement resembling "BIOS way of things is better" is bullshit.
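                    To make the "boot manager" point concrete, here's roughly what such a thing looks like with gnu-efi. This is only a sketch under my assumptions (an x86_64 EFI toolchain; the \EFI\demo\next.efi path is made up for illustration). It never touches the disk itself; it just asks the firmware to load and start another image from the same ESP:

                    #include <efi.h>
                    #include <efilib.h>

                    /* Hypothetical path of the next stage on the same ESP. */
                    #define NEXT_STAGE L"\\EFI\\demo\\next.efi"

                    EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
                    {
                        EFI_LOADED_IMAGE *self;
                        EFI_DEVICE_PATH *path;
                        EFI_HANDLE next = NULL;
                        EFI_STATUS status;

                        InitializeLib(ImageHandle, SystemTable);

                        /* Ask the firmware which device (the ESP) we were loaded from. */
                        status = uefi_call_wrapper(BS->HandleProtocol, 3, ImageHandle,
                                                   &LoadedImageProtocol, (void **)&self);
                        if (EFI_ERROR(status))
                            return status;

                        /* Point at the next stage on that same device and let the firmware's
                           own FAT driver and PE loader do all the heavy lifting. */
                        path = FileDevicePath(self->DeviceHandle, NEXT_STAGE);
                        status = uefi_call_wrapper(BS->LoadImage, 6, FALSE, ImageHandle, path,
                                                   NULL, 0, &next);
                        if (EFI_ERROR(status)) {
                            Print(L"LoadImage failed: %r\n", status);
                            return status;
                        }
                        return uefi_call_wrapper(BS->StartImage, 3, next, NULL, NULL);
                    }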



                    • #50
                      Originally posted by starshipeleven View Post
                      That's the size for distros that use it as /boot; most others are in the 256MB range, and Windows uses 100MB afaik.
                      But still, as I said, you are living in the past. We aren't in 2009 anymore. Even a GB is mostly irrelevant on modern drives.

                      It's either part of the dozen GBs or so that are "wasted" as overprovisioning on an SSD (which you waste either by remembering to never fill the thing up or by making a smaller partition; I do the latter) or totally irrelevant on a multi-TB mechanical drive.

                      Even assuming you have multiple slots for this, it limits the size of the bootloader to the space you reserved when you wrote the specification, which would eventually make it obsolete bullshit a decade down the line when "640k isn't enough for everyone" anymore.

                      Using a partition with a simple filesystem allows more flexibility without having to resort to retarded bullshit like implementing a short stub bootloader and then having an additional partition.

                      It's also much easier for maintenance, as now you don't need to write hex stuff raw to a drive offset; you can just copy files around with an OS.

                      I also don't really like this from a security standpoint. Yes, you could have a fully signed setup and then it would be fine, but if you mostly force bootloaders to just be "boot managers" like UEFI does (i.e. they rely on the board firmware to do anything serious, like filesystem access), it's still significantly safer than loading and executing random shit from the MBR, without requiring the hassle of a fully signed boot chain.

                      Mind you, I'm not saying UEFI does this well. I'm saying that the general ideas behind what it does are very sound, and that any statement resembling "BIOS way of things is better" is bullshit.
                      Using spinning rust as a boot drive should be considered self-harm these days. Booting from a given offset on a disk can be quite flexible: the code in the allocated part can simply jump to a gap between partitions (this is how GNU GRUB does it).

                      As for security, I don't think it's a good idea to rely on the manufacturer and their proprietary software; you're better off replacing the firmware with something that can provide superior security using keys you sign yourself (coreboot can do that). I think coreboot is a good example of how it should be done: you can simply have your init code jump to GRUB in your ROM and then chainload a configuration off your SSD. It provides a quite versatile boot environment that can handle plenty of usable file systems (with more coming down the road) and a shell to boot manually from other devices. It can also be compiled with a crypto key to verify whatever you're trying to boot, and you can audit the code, unlike proprietary firmware.

                      I just like simplicity and small specs; there's no need for a huge UEFI when you can have firmware that jumps to a fixed offset on a disk and lets people implement whatever they want on top of it.

