GNOME To Warn Users If Secure Boot Disabled, Preparing Other Firmware Security Help


  • Waethorn
    replied
    Originally posted by uid313 View Post

COTS means Commercial Off The Shelf: all the COTS motherboards and computers that consumers and end users buy in stores have the Microsoft key preinstalled.
    You mean except for Apple, of course.



  • uid313
    replied
    Originally posted by sinepgib View Post

    I'm aware, but I don't trust it won't lead to simply a proliferation of runtimes.
    > "Oh, I made this codedrop in 2020, so it's using Ubuntu 20.04 base."
    > "Now it's 2022, let's go with 22.10!"
    > "I'm much more of a fan of Fedora so..."
    It's not a promising future in my eyes.



    What is "COTS"? The Microsoft key does come preinstalled in most cases. The issue is that, essentially, some new ones disable the key used for the shim by default IIUC. I may have misunderstood the situation tho.
COTS means Commercial Off The Shelf: all the COTS motherboards and computers that consumers and end users buy in stores have the Microsoft key preinstalled.



  • Malsabku
    replied
    I hope distributors have the possibility to disable this (un)feature.



  • sinepgib
    replied
    Originally posted by Waethorn View Post
Antivirus generally sucks. Many of the attacks are hacks through insecure applications like RDP or brute-forcing SAM in Windows or WAN services in Linux that aren't exclusively whitelisted like SSH. Cryptolocker attacks are often themselves encrypted, making signature detection impossible. Heuristics are far better than signature-based solutions, but verifiability of applications goes a long way toward securing a desktop.
    It's complementary. Of course, if you can ensure verifiability that's probably much better.

    Originally posted by Waethorn View Post
Servers run services. Most of those are in containers with layered sandboxing and I don't see a lot of complaints about how that's implemented so far - Docker and orchestration tools have taken off in popularity. By your own argument, you said that doesn't work for desktops so I'm not sure what you're getting at there.

    Checking process versions of every application just isn't feasible IMO. You can't do sandboxing of applications from the OS without doing dependency controls.
    You seem to still be arguing sandboxing as if it was the same thing as containerization, so let's just agree on this definition please: https://en.wikipedia.org/wiki/Sandbo...puter_security)
    Sandboxing is not about the app, it's about the privileges the OS gives it to work. You don't need to check versions of anything. You default to not allowing much to be done, and expect the app to ask for permissions and the user to decide whether those permissions make sense.
    Containerization covers two different things: sandboxing and deployment. Docker, Snap, Flatpak and friends implement containerization in that sense. They take care of isolation, but also of shipping deps in the form of runtimes/layers. But the major memory and storage consumption comes from the latter only. You don't need extra software on the app and lib layer to apply a sandbox.
    Basically, sandboxing on Linux means properly configured namespaces and not much more than that.

Regarding Docker on the server, as I mentioned it's a much different scenario. You control where it'll be running, you need to set up and tear down those computers dynamically so having something that doesn't require separate configuration steps is critical, etc. Note that before containers you'd use virtual machines in servers precisely because you needed both of the properties of containers (some people still do). Compare that to a desktop: does the typical user run a VM per application? You need to respect the fact that end users have a wider variety of hardware, often too weak to bloat with these solutions. An extra 100MB per container won't change a thing on a server with 128GB of RAM and many TBs of storage. A laptop with 4GB of RAM and 120GB of SSD can't spend that much on every one of its applications. You need to apply sandboxing by other means there.
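The scaling asymmetry above can be put into rough numbers. A minimal sketch, where every figure (app count, per-container overhead, RAM sizes) is a made-up assumption purely for illustration:

```python
# Back-of-the-envelope: duplicated per-container runtime overhead vs. total RAM.
# All numbers below are illustrative assumptions, not measurements.

def overhead_fraction(apps, per_app_mb, total_ram_gb):
    """Fraction of total RAM eaten by duplicated runtime overhead alone."""
    return (apps * per_app_mb) / (total_ram_gb * 1024)

# Same 40 apps, same hypothetical 100MB overhead each, two very different hosts.
server = overhead_fraction(apps=40, per_app_mb=100, total_ram_gb=128)
laptop = overhead_fraction(apps=40, per_app_mb=100, total_ram_gb=4)

print(f"server: {server:.1%} of RAM")  # 3.1% - noise
print(f"laptop: {laptop:.1%} of RAM")  # 97.7% - untenable
```

The same absolute overhead is noise on the server and fatal on the laptop, which is the asymmetry being argued.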

    Originally posted by Waethorn View Post
About Flatpaks: sure you can ship proprietary software. Some repos don't allow that, such as the Fedora Flatpak repo. Fedora's repo keeps the Flatpaks mostly similar to the original RPM package, whereas the Flathub repo can use whatever they want (within reason and compatibility concerns). Fedora says that they would like to see as many packages as possible use Flatpak but will continue to support and include native RPMs wherever possible. Since everything in their repo is GPL, I don't see there being anything in Fedora's Flatpak builds that isn't already in RPM, but there will be RPMs that aren't going to be in Flatpak format. Flatpaks are designed mainly for desktop apps though. Server apps will just use Docker, Podman, or some other headless service container system. Fedora recognizes that desktop technologies and server technologies are different animals, so they will use the right tool for the job. Canonical decided to make a single format for both, which fractures the ecosystem in more than one way.
    Out of tree programs (often, but not always proprietary) are pretty much the reason all those solutions exist for the desktop. Linux has been largely optimized for a distribution repository workflow and everything has been hard to ship. That is an unsolvable problem for Linux because of the Bazaar model, it'll always be hard to ship, unless you resort to hacks that ship all dependencies, including the base system (runtimes can be that arbitrary). It's nice that they added sandboxing in the mix, but that is not and never was the reason to create Flatpak. You can use Bubblewrap for sandboxing, which is actually what Flatpak uses, without paying the high price of shipping the kitchen sink.
    That distros are starting to also ship their own packages as Flatpak is simply baffling.
Regarding Canonical, my focus right now is not their inability to share existing solutions nor their incompetence to make good implementations of their NIH alternatives. I'm talking about the concept in general. Sure, Canonical's is worse, but that's beside the point.

    Originally posted by Waethorn View Post
    Newer UEFI specifications support capsule updates where the package is uploaded and the program self-configures on next boot without needing the OS to do anything. Microsoft figured out how to do this safely since they ship a Surface firmware update package every month. I don't know if TianoCore supports capsule updates. I would imagine you would need some kind of ROM chip program to support that.
    I think you mean flashrom, it's the standard tool to flash chips with Coreboot.

    Originally posted by Waethorn View Post
    Coreboot came from LinuxBoot. They'd expanded to include additional payloads, but there's already support for a U-Boot payload there.
Yes. Rather than expanding, they rearchitected everything so Linux became a regular payload. Regarding U-Boot support, I think it's there mostly to be useful for ARM devices. I guess you could use it on x86, but it sounds weird.

Originally posted by Waethorn View Post
Quite frankly, NERF looks more interesting since it doesn't seek to replace platform initialization with Coreboot.

What's the fun in that?

    Originally posted by Waethorn View Post
    I would rather prefer just to see disk init as part of the standard system init instead of being separated out into a different process as it is currently with Coreboot.
    It makes it quite straightforward to set up. I like to have my options honestly. I'm fully aware most people would need either SeaBIOS or TianoCore due to that being what the major OSes expect, but I for example like the idea of having some (non-critical) computers simply use FILO to boot Linux, or even boot Linux from ROM.

    Originally posted by Waethorn View Post
    I've said before that systems are being designed to be too complicated. Multiple levels of caching, a hardware management OS (Intel ME) booting a firmware OS (UEFI) into another system OS, all with redundant drivers... If computers were designed where hardware platform support was left to the hardware vendors themselves, writing their own drivers as they always wanted to, and the OS was all just one big usermode, computers would be far simpler to troubleshoot.
We've been there and it was terrible. And specifically, the multiple levels of caching are pretty much the reason we have decent performance. If there's one critical feature for performance in modern CPU architecture, it's hierarchical caching. Take that away and everything else falls. IME and that random stuff, yeah, that's a bit shady.



  • Waethorn
    replied
    Originally posted by sinepgib View Post

    That's not at all what I meant tho. Maybe it got lost in translation, what I meant is that people tend to believe Linux is immune while in reality it's just not yet a major target. If our ambitions of it becoming a major platform come true, we're pretty much fucked because we think we're too good for an antivirus
    We shouldn't wait for massive catastrophe to try and protect ourselves. And besides, it's still a major target when it comes to servers, so why not leverage a solution designed for those in our desktops?



But we agree. It's just that you isolate at the process level. It doesn't matter whether you use the same version of the library or not, because what you check is the process. Library calls that try to go past what the process is allowed to do will fail. You don't need the whole of Snap or Flatpak to apply sandboxing, and it's just an extra measure. It's not a replacement for application-level firewalls, but complementary.



    Yes, we agree. That's why they're very bad approaches. But the problem stems from dependency control, not from sandboxing.



    You can ship proprietary software with Flatpaks and Snaps, that's independent from the technology being open source.



    This sounds just like SeaBIOS tho. The problem with upgrading the ROM too often is mostly that its writes are much more limited. It supports about 10k writes. We've seen LTS kernels surpass the 256 limit for a single major release IIRC.



    Yeah, but U-boot is more or less meant for a single use in embedded. It loads a given image from a FAT partition. The idea of Coreboot is to be unopinionated about what you do after hardware bring-up.
Antivirus generally sucks. Many of the attacks are hacks through insecure applications like RDP or brute-forcing SAM in Windows or WAN services in Linux that aren't exclusively whitelisted like SSH. Cryptolocker attacks are often themselves encrypted, making signature detection impossible. Heuristics are far better than signature-based solutions, but verifiability of applications goes a long way toward securing a desktop. Servers run services. Most of those are in containers with layered sandboxing and I don't see a lot of complaints about how that's implemented so far - Docker and orchestration tools have taken off in popularity. By your own argument, you said that doesn't work for desktops so I'm not sure what you're getting at there.

    Checking process versions of every application just isn't feasible IMO. You can't do sandboxing of applications from the OS without doing dependency controls.

About Flatpaks: sure you can ship proprietary software. Some repos don't allow that, such as the Fedora Flatpak repo. Fedora's repo keeps the Flatpaks mostly similar to the original RPM package, whereas the Flathub repo can use whatever they want (within reason and compatibility concerns). Fedora says that they would like to see as many packages as possible use Flatpak but will continue to support and include native RPMs wherever possible. Since everything in their repo is GPL, I don't see there being anything in Fedora's Flatpak builds that isn't already in RPM, but there will be RPMs that aren't going to be in Flatpak format. Flatpaks are designed mainly for desktop apps though. Server apps will just use Docker, Podman, or some other headless service container system. Fedora recognizes that desktop technologies and server technologies are different animals, so they will use the right tool for the job. Canonical decided to make a single format for both, which fractures the ecosystem in more than one way.

    Newer UEFI specifications support capsule updates where the package is uploaded and the program self-configures on next boot without needing the OS to do anything. Microsoft figured out how to do this safely since they ship a Surface firmware update package every month. I don't know if TianoCore supports capsule updates. I would imagine you would need some kind of ROM chip program to support that.

    Coreboot came from LinuxBoot. They'd expanded to include additional payloads, but there's already support for a U-Boot payload there. Quite frankly, NERF looks more interesting since it doesn't seek to replace platform initialization with Coreboot. I would rather prefer just to see disk init as part of the standard system init instead of being separated out into a different process as it is currently with Coreboot. I've said before that systems are being designed to be too complicated. Multiple levels of caching, a hardware management OS (Intel ME) booting a firmware OS (UEFI) into another system OS, all with redundant drivers... If computers were designed where hardware platform support was left to the hardware vendors themselves, writing their own drivers as they always wanted to, and the OS was all just one big usermode, computers would be far simpler to troubleshoot.



  • sinepgib
    replied
    Originally posted by Waethorn View Post
1) I was mentioning "security through obscurity" because you said Linux is in the clear due to it not being used much. That's not a reason to excuse lapses in security. Being "not a target" isn't the same as being secure. Macs are more obscure than Windows, but when they get malware, Apple just shrugs and tells you to wipe and reinstall your OS, and to hell with your data because of disk encryption.
    That's not at all what I meant tho. Maybe it got lost in translation, what I meant is that people tend to believe Linux is immune while in reality it's just not yet a major target. If our ambitions of it becoming a major platform come true, we're pretty much fucked because we think we're too good for an antivirus
    We shouldn't wait for massive catastrophe to try and protect ourselves. And besides, it's still a major target when it comes to servers, so why not leverage a solution designed for those in our desktops?

    Originally posted by Waethorn View Post
    2) The majority of application sandboxing technologies don't allow applications access to common OS resources on the flat filesystem, otherwise what's the point? None of them give users control to what level of library resources are permitted by the application because, quite frankly, that's pretty dumb. The best you'd have is a permission system and/or firewall control.
But we agree. It's just that you isolate at the process level. It doesn't matter whether you use the same version of the library or not, because what you check is the process. Library calls that try to go past what the process is allowed to do will fail. You don't need the whole of Snap or Flatpak to apply sandboxing, and it's just an extra measure. It's not a replacement for application-level firewalls, but complementary.
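The process-level model described above can be sketched as a toy default-deny policy check. This is purely illustrative Python; the `Sandbox` class and the capability strings are invented for the sketch and are not any real Flatpak/Snap API:

```python
# Toy default-deny sandbox: policy attaches to the *process*, not to
# which library (or library version) inside it makes the call.
from dataclasses import dataclass, field

@dataclass
class Sandbox:
    # Capabilities the user explicitly granted; everything else is denied.
    granted: set = field(default_factory=set)

    def check(self, capability: str) -> bool:
        # Deny by default: only explicitly granted capabilities pass,
        # no matter what code inside the process asks for them.
        return capability in self.granted

app = Sandbox(granted={"read:~/Documents", "net:outbound"})
assert app.check("net:outbound")      # granted, so allowed
assert not app.check("read:~/.ssh")   # never granted, so denied
```

Any library call the process makes that tries to go past the granted set fails the same way, which is the point being made: no dependency control is needed for the sandbox itself.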

    Originally posted by Waethorn View Post
    3) The DLL Hell is all about version control of dependencies. When you're dealing with Flatpaks, Snaps, or any other application container system like this, you're going to deal with the same mess, which is what I thought you were getting at when it came to the talk about not having infinite drive space (and there's the RAM concern of multiple versions all running simultaneously).
    Yes, we agree. That's why they're very bad approaches. But the problem stems from dependency control, not from sandboxing.

    Originally posted by Waethorn View Post
    4) Flatpaks aren't proprietary. Neither are Snaps, but Canonical leverages it for their own interest like they do almost all of their own projects. The rest is made by Debian (well it's mainly by Red Hat employees in most cases).
    You can ship proprietary software with Flatpaks and Snaps, that's independent from the technology being open source.

    Originally posted by Waethorn View Post
    Firmware shouldn't be harder to update what with fwupd being available now. If not, they should build in basic disk initialization into coreboot and go back to automatically booting off the "x sector" like they did in the good ol' days.
This sounds just like SeaBIOS tho. The problem with upgrading the ROM too often is mostly that its writes are much more limited: it supports about 10k writes. We've seen LTS kernels surpass the 256 limit for a single major release IIRC.
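The endurance arithmetic can be made concrete using the two figures above (10k as an order-of-magnitude cycle rating, 256 as the point-release count of a long-lived LTS series), under the hypothetical assumption that every kernel update meant a ROM reflash:

```python
# How much of a flash chip's write endurance would a reflash-per-update
# policy consume? Both figures are taken from the discussion above and
# are order-of-magnitude values, not datasheet numbers.
WRITE_ENDURANCE = 10_000    # approximate erase/write cycle rating
LTS_POINT_RELEASES = 256    # point releases of one long-lived LTS series

budget_used = LTS_POINT_RELEASES / WRITE_ENDURANCE
print(f"one LTS series alone: {budget_used:.1%} of the write budget")  # 2.6%
```

Whether that budget is comfortable depends on what else rewrites the chip over the machine's life; the underlying point is that ROM writes are a finite resource in a way disk writes effectively aren't.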

    Originally posted by uid313 View Post
    The basic boot "firmware" on a Raspberry Pi is just loaded off disk. Many other ARM chips use U-boot or something custom.
    Yeah, but U-boot is more or less meant for a single use in embedded. It loads a given image from a FAT partition. The idea of Coreboot is to be unopinionated about what you do after hardware bring-up.



  • sinepgib
    replied
    Originally posted by uid313 View Post
I think AppArmor can be configured by the Linux distribution; either way, it is not really being done. So yeah, AppArmor and SELinux can be considered nerd-only things too.
As for Flatpak and Snap, they can use runtimes so it doesn't have to be so large: it's large the first time you install a package because it also installs a runtime, but when you install more packages they're not as large because they use the existing installed runtime.
    I'm aware, but I don't trust it won't lead to simply a proliferation of runtimes.
    > "Oh, I made this codedrop in 2020, so it's using Ubuntu 20.04 base."
    > "Now it's 2022, let's go with 22.10!"
    > "I'm much more of a fan of Fedora so..."
    It's not a promising future in my eyes.

    Originally posted by uid313 View Post
    I don't think any COTS motherboard or COTS computer comes without UEFI Secure Boot enabled and the Microsoft key pre-installed.
    What is "COTS"? The Microsoft key does come preinstalled in most cases. The issue is that, essentially, some new ones disable the key used for the shim by default IIUC. I may have misunderstood the situation tho.



  • uid313
    replied
    Originally posted by sinepgib View Post
    Aren't the first three also just for nerds who manually set it up?
    Regarding Flatpak and Snap, I don't think those will bring sandboxing to the masses because their cost (because of stuff unrelated to sandboxing) is just too big to assume general adoption. Not everyone has infinite storage and RAM.
I think AppArmor can be configured by the Linux distribution; either way, it is not really being done. So yeah, AppArmor and SELinux can be considered nerd-only things too.
As for Flatpak and Snap, they can use runtimes so it doesn't have to be so large: it's large the first time you install a package because it also installs a runtime, but when you install more packages they're not as large because they use the existing installed runtime.
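The runtime-sharing effect described above amounts to a simple size model. A sketch, with all sizes being made-up placeholders rather than real package measurements:

```python
# Shared runtime vs. fully bundled dependencies.
# Sizes are invented for illustration only.
RUNTIME_MB = 600   # one shared runtime, stored once (made-up size)
APP_MB = 50        # average per-app payload on top of it (made-up)

def total_size(n_apps, shared_runtime=True):
    if shared_runtime:
        return RUNTIME_MB + n_apps * APP_MB      # runtime paid for once
    return n_apps * (RUNTIME_MB + APP_MB)        # every app bundles everything

print(total_size(1))                         # 650  - first install is big
print(total_size(10))                        # 1100 - extra apps are cheap
print(total_size(10, shared_runtime=False))  # 6500 - no sharing at all
```

It also shows the flip side argued elsewhere in the thread: every additional *distinct* runtime is another full `RUNTIME_MB`-class install, so runtime proliferation erodes the saving.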

    Originally posted by sinepgib View Post
    Some users will see it briefly tho. Some new boards come with 3rd party keys disabled, so they may need to access the UEFI menu before being able to boot without the warning. I don't think it's that much of a problem tho.
    I don't think any COTS motherboard or COTS computer comes without UEFI Secure Boot enabled and the Microsoft key pre-installed.



  • Waethorn
    replied
    Originally posted by sinepgib View Post

    How is signing binaries security through obscurity? Security through obscurity refers to hiding the source code in the hope exploits won't be found AFAIK. We know from experience that doesn't work. But you can verify your boot for open source code made with reproducible builds, guaranteeing indeed the source code you see is the binary code you run.
    I don't understand the locked doors metaphor, since this is precisely locking your door.



You don't need to sandbox dependencies in a different way. You expose a limited view of the filesystem/network via the kernel itself, meaning the only policies you need will be at the application level: they can call whatever code they want, they'll still see this limited view. I agree sandboxing is not the end of all security measures, but it's just another layer. You don't aim for perfect security because that does not exist. Instead, you aim to make attacks impractical, too hard to bother with. Sandboxing is about exposing only what the user agrees with the developer is necessary for the application to be useful. Nothing more, nothing less.
Shipping dependencies (DLL hell) has nothing to do with sandboxing, which is why I said Snap and Flatpak's cost is not sandboxing-related. That has more to do with the lack of a central authority enforcing userspace library compatibility, which means targeting Linux as an end-user platform is hard. The lazy solution was to containerize everything. A piss-poor one IMO. But that wasn't about security, and we agree on all of its flaws AFAICT.
    It may be true that it leads to laziness tho, but IMO that predates these solutions, and this is the mitigation for that fact.



    Yep. This is particularly messy in Linux. And there's no way to fix it because of the bazaar model of development. It'll be broken forever. Thus, Snap and Flatpak. I avoid them like the plague, but I can take that luxury because I seldom use anything proprietary.



As I said, those issues with containerization are real, but they are orthogonal to sandboxing. For example, you can have regular Firefox sandboxed with Firejail, even if that's impractical for regular users. I share the dislike for containerization on the desktop (servers are a whole different story: containers add much more value there in terms of ease of scaling your service by adding servers with nearly zero extra effort, compared to the traditional bare-metal ways; the same applies to VMs, containers just do it more efficiently).
1) I was mentioning "security through obscurity" because you said Linux is in the clear due to it not being used much. That's not a reason to excuse lapses in security. Being "not a target" isn't the same as being secure. Macs are more obscure than Windows, but when they get malware, Apple just shrugs and tells you to wipe and reinstall your OS, and to hell with your data because of disk encryption.

    2) The majority of application sandboxing technologies don't allow applications access to common OS resources on the flat filesystem, otherwise what's the point? None of them give users control to what level of library resources are permitted by the application because, quite frankly, that's pretty dumb. The best you'd have is a permission system and/or firewall control.

    3) The DLL Hell is all about version control of dependencies. When you're dealing with Flatpaks, Snaps, or any other application container system like this, you're going to deal with the same mess, which is what I thought you were getting at when it came to the talk about not having infinite drive space (and there's the RAM concern of multiple versions all running simultaneously).

    4) Flatpaks aren't proprietary. Neither are Snaps, but Canonical leverages it for their own interest like they do almost all of their own projects. The rest is made by Debian (well it's mainly by Red Hat employees in most cases).



  • Waethorn
    replied
    Originally posted by sinepgib View Post

    Was it? Then maybe I misunderstood the question.



    No idea. But Linux itself also supports its own key to verify modules, so if you boot Linux via Coreboot in your ROM (AFAIK it can't boot anything on disk without a payload doing so) you could load verified modules after that. There's no major risk of tampering with whatever's in ROM IMO, so you don't need to sign that part, or at least it's not as critical to do so.



    I agree. Using legacy BIOS is a bad idea if you care about security or flexibility.



Well, firmware is harder to update, and you need up-to-date kernels for security. Linux is often used as a temporary payload that kexecs to another kernel on disk instead; I wouldn't recommend using it directly from ROM unless you're doing it for fun. But SeaBIOS in particular sounds rather dumb. If you don't trust UEFI then you could do something like adding signatures to FILO (assuming it doesn't have them, I haven't checked) or using GRUB, as you suggest, but since you're using an open source implementation anyway you could audit it, I guess.
    Firmware shouldn't be harder to update what with fwupd being available now. If not, they should build in basic disk initialization into coreboot and go back to automatically booting off the "x sector" like they did in the good ol' days. The basic boot "firmware" on a Raspberry Pi is just loaded off disk. Many other ARM chips use U-boot or something custom.

