Devuan 3.0 "Beowulf" Reaches Beta For Debian 10 Without Systemd

  • #21
    Originally posted by Britoid View Post

    Didn't realise Debian didn't allow you to uninstall systemd...


    oh wait.
    This literally could have just been a .deb, maybe in an APT repo. I remember back in the early days of systemd, I was able to build systemd from a tarball and slap it in as the default init, and it just worked (I just needed to write one .service to drop sysvcompat).
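    To give an idea of the scale involved: the sysvcompat shim was on the order of a single unit file. A minimal sketch of the kind of unit that covers it (from memory; the name, description and paths here are illustrative, not the exact file I wrote):

```ini
# rc-local-compat.service -- illustrative sketch of a sysvcompat shim:
# run the legacy /etc/rc.local once at boot under systemd.
[Unit]
Description=Compatibility wrapper for legacy /etc/rc.local
After=network.target

[Service]
Type=oneshot
ExecStart=/etc/rc.local
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```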



    • #22
      Originally posted by oiaohm View Post

      Yes, LUKS is extendable; it's not homed's only option.
      https://wiki.archlinux.org/index.php...rage_mechanism

      homed in fact supports multiple storage backends in its design; some are made already.

      Unencrypted
      1) a directory
      2) btrfs subvolume.

      Encrypted
      1) fscrypt directory on supported file systems, which is currently ext4 and f2fs. There is work to bring this to btrfs as well. Can you show me progress on fscrypt support on ZFS integrated with the mainline kernel? That's right, you cannot, because ZFS is not working on mainlining.
      2) LUKS. This one is interesting and way more complex. It contains a GPT partition table and, as noted, for now it should only contain a single partition. Please note the "for now": future plans include per-user encrypted swap inside the LUKS volume alongside the home directory. So this is a different level of protection to what ZFS is designed to provide. Please note any encryption worth its salt needs to be properly peer reviewed.

      So there are currently 4 backend plugins into homed, and homed is in fact designed to take more.

      Ubuntu has a few options:
      1) Ignore homed's limitations and say anyone wanting to use it encrypted just cannot use ZFS; the simplest solution.
      2) Add and maintain their own extra backend to homed for encrypted ZFS. The non-encrypted directory method will already work.
      3) Somehow convince ZFS developers to work on the upstream Linux kernel to get fscrypt, and to get support from mainline systemd developers in homed.

      Those outcomes are in order of likelihood.

      Basically this is one of the areas where ZFS not being properly mainlined in the Linux kernel is causing hell, by causing fragmentation and disputes. Like it or not, ZFS is a second-class file system on Linux, and it will stay that way while it is not mainlined in the Linux kernel.
      You might not see how clunky it is if you don't have experience with ZFS.

      In ZFS you just have a volume, in this case something like pool/home mounted to /home/user; it automatically shares all the storage of the pool, and any RAID features of the pool. It can be compressed or encrypted or both, in the correct order, is snapshot-able (at no cost), and can incrementally replicate to any remote storage, not only another ZFS volume (you can even put it on a public cloud disk if it's encrypted, because the private key isn't included, so Google Drive would work). It doesn't rely on CIFS, so any storage would work, even ssh. When it transfers it does an immediate delta copy, so it's useful for slow storage and can be resumed.

      They could theoretically work together; you could set a home image to be a ZVOL, but a lot of the benefit of homed would be redundant, and ZFS's implementation is far simpler. At a high level it just has fewer parts and less admin. If you had both, like Ubuntu does, I'm not sure why you'd use homed. I can't really see Ubuntu using both for any real reason.. it would just make the disk slow and cause it to duplicate work.

      homed seems to have a lot of limitations for little benefit..
      Last edited by k1e0x; 19 March 2020, 01:17 PM.



      • #23
        As far as quality goes? Is anyone really an expert on crypto? How many DJBs and Bruce Schneiers are there, and would they even claim the title? It's one thing to call yourself a cryptologist, another to say you are a crypto expert.

        The implementation is taken from Solaris, which is EAL4+ (at par or better than Linux). OpenZFS's side of that implementation is fairly new and will have to stand over time; so far it has only existed for 4 years and been in production for a year. So far nobody has found a problem; this isn't true for Oracle ZFS.. and that had to be rewritten twice.

        Dedup in ZFS causes a known information leak (the block isn't written twice, so it can't be encrypted differently; you tell me how to solve that, lol). You can use it if you want (dedup sucks anyhow), but it will weaken your crypto.
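        The dedup leak is easy to see in miniature. A toy content-addressed store (illustrative only; nothing like ZFS's real on-disk format, and the names here are made up):

```python
import hashlib

# Toy content-addressed dedup store: each block is stored once, keyed by
# its hash. The leak described above: two writers of identical plaintext
# get the same handle, so an observer of the store learns their data was
# equal -- even if everything else is encrypted.
store = {}

def write(block):
    key = hashlib.sha256(block).hexdigest()
    store.setdefault(key, block)   # an identical block is never written twice
    return key

h1 = write(b"the same secret payload")   # user A
h2 = write(b"the same secret payload")   # user B, independently
assert h1 == h2          # equal handles reveal equal contents
assert len(store) == 1   # only one physical copy exists
```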

        If you don't trust it, that's fine. You can still stick ZFS on GELI or LUKS like every other filesystem (including homed); you just lose some benefits and things get clunky (like homed).
        Last edited by k1e0x; 19 March 2020, 02:49 PM.



        • #24
          Originally posted by k1e0x View Post
          You might not see how clunky it is if you don't have experience with ZFS.

          In ZFS you just have a volume, in this case something like pool/home mounted to /home/user; it automatically shares all the storage of the pool, and any RAID features of the pool. It can be compressed or encrypted or both, in the correct order, is snapshot-able (at no cost), and can incrementally replicate to any remote storage, not only another ZFS volume (you can even put it on a public cloud disk if it's encrypted, because the private key isn't included, so Google Drive would work). It doesn't rely on CIFS, so any storage would work, even ssh. When it transfers it does an immediate delta copy, so it's useful for slow storage and can be resumed.

          They could theoretically work together; you could set a home image to be a ZVOL, but a lot of the benefit of homed would be redundant, and ZFS's implementation is far simpler. At a high level it just has fewer parts and less admin. If you had both, like Ubuntu does, I'm not sure why you'd use homed. I can't really see Ubuntu using both for any real reason.. it would just make the disk slow and cause it to duplicate work.

          homed seems to have a lot of limitations for little benefit..
          Really, it's about time you pulled your head out of your ass. Everything you just listed is pointless because ZFS is missing a very key feature: cgroup/namespace integration for isolated page tables to prevent data leaks.

          The other methods homed is using may not be as neat, but you will not have the local data leaks with them..

          https://fosdem.org/2020/schedule/eve...nux_Kernel.pdf

          The work is here. You want to maintain your own cache in ZFS, outside the namespace system of the Linux kernel. This creates a nice little security hole.

          Sorry, I have looked at ZFS, k1e0x; your argument does not hold water. It's an insecure solution that was not designed to work securely even with Solaris zones, let alone Linux cgroups/namespaces.

          So basically all the other features you listed are "tits on a bull" class of pointless when you don't have the basics covered.

          homed is not only about mounting the home directory; it deals with uid/gid conflict events so the protected home directory still works even if the user id or group id has to be changed on the fly.

          Please note, if ZFS supported the fscrypt interface for encrypted directories (which it does not at the moment), then yes, all the other ZFS features you listed would be usable with homed out of the box. But you are not integrated mainline, so you don't provide that interface.

          Also there is a reason why homed has not focused on sharing the whole home directory across the storage pool: it would just make cleaning house after user removal more complex, to be sure you have in fact removed it all. So there are a lot of ZFS features that are pointless for a lot of homed use cases.



          • #25
            Originally posted by Antartica View Post

            There are really strange dependencies on systemd in Debian 10 "buster". I've recently upgraded from a Debian 9 "stretch" without systemd (that machine is using sysvinit with a barebones window manager, ratpoison), and the upgrade uninstalled the intel video drivers because it had sysvinit-core on "hold".

            In order to fix that I've had to repackage the intel video driver so that it doesn't have those dependencies (specifically xserver-xorg-video-intel, as well as xserver-xorg-core and xserver-xorg-input-libinput). Not the cleanest of fixes, but enough for now.

            And then I tried to use virt-manager and noticed that the upgrade broke it too!
            And people shout at me when I say apt is garbage as a package manager, and that any issue in packaging comes from installing third-party repositories.



            • #26
              Originally posted by Antartica View Post
              So it's not that systemd is not working, but that the dependencies on systemd break unrelated software, even though that software works without systemd. The blame should be on the current Debian packaging dependencies.
              It seems the maintainers have started showing their true colors now, and decided which is the only supported init in Debian regardless of the bullshit vote, so we can stop having a bunch of services that are just shims to an init script, or crappy basic init scripts as a "fallback" in case the system does not have systemd.

              It's not "the universal operating system" anymore, as the embedded or single-purpose scenarios of yore don't work in this systemd future.
              What is stopping you from using systemd in embedded devices and single-purpose scenarios? It's not lack of resources, I guess, because Debian is a pig compared to proper embedded-first distros like OpenWrt (which simply won't adopt systemd because of actual resource constraints, and which has its own custom init).

              Also, if you want somewhere to migrate to, OpenWrt is a good choice I guess.



              • #27
                Originally posted by oiaohm View Post

                Really, it's about time you pulled your head out of your ass. Everything you just listed is pointless because ZFS is missing a very key feature: cgroup/namespace integration for isolated page tables to prevent data leaks.

                The other methods homed is using may not be as neat, but you will not have the local data leaks with them..

                https://fosdem.org/2020/schedule/eve...nux_Kernel.pdf

                The work is here. You want to maintain your own cache in ZFS, outside the namespace system of the Linux kernel. This creates a nice little security hole.

                Sorry, I have looked at ZFS, k1e0x; your argument does not hold water. It's an insecure solution that was not designed to work securely even with Solaris zones, let alone Linux cgroups/namespaces.

                So basically all the other features you listed are "tits on a bull" class of pointless when you don't have the basics covered.

                homed is not only about mounting the home directory; it deals with uid/gid conflict events so the protected home directory still works even if the user id or group id has to be changed on the fly.

                Please note, if ZFS supported the fscrypt interface for encrypted directories (which it does not at the moment), then yes, all the other ZFS features you listed would be usable with homed out of the box. But you are not integrated mainline, so you don't provide that interface.

                Also there is a reason why homed has not focused on sharing the whole home directory across the storage pool: it would just make cleaning house after user removal more complex, to be sure you have in fact removed it all. So there are a lot of ZFS features that are pointless for a lot of homed use cases.
                Lol, you just totally jumped the shark here. Let me see if I have this right..

                A cross-platform filesystem... we are talking about a file system.. does not have CPU sandbox separation, therefore it's insecure? You have much larger problems on your hands if *this* is what you think your attack vector is. You can't protect kernel tasks from root. (Well, maybe you can, but you'd need to be on Solaris or BSD.)

                But let's continue.. so.. you therefore declare this a security hole (even though it's a mitigation feature), and so according to you it has NO use cases. Mmm.. no, that isn't sound logic. But then in the next breath you say homed ignores Unix uid/gid; well, that is security as well, and all that tells me is that RedHat can't figure out how to use LDAP internally. uid/gid are important, especially on an NFS network, but you know, if RedHat wants to go back to DOS.. be my guest.

                fscrypt should work just fine with ZFS. Anything file-level works.

                Lastly... you go to the extreme and say "why would anyone want to use all the storage in their system??? It just makes cleaning it up later harder." Really? Yeah.. I can't think of a reason people would want to use the resources in their computer.... It just mucks everything up. Know what? You can keep it even cleaner by not turning it on in the first place. It's true. lol

                You can see that RedHat is doing things here to let Linux operate only in a Microsoft Windows enterprise network.. Is that how it's going to be? Are we just going to let Microsoft win the battle and abandon NFS for CIFS and LDAP for AD? I say no.
                Last edited by k1e0x; 19 March 2020, 09:49 PM.



                • #28
                  Originally posted by k1e0x View Post
                  A cross-platform filesystem... we are talking about a file system.. does not have CPU sandbox separation, therefore it's insecure? You have much larger problems on your hands if *this* is what you think your attack vector is. You can't protect kernel tasks from root. (Well, maybe you can, but you'd need to be on Solaris or BSD.)
                  No, this shows how wrong you are; even putting the file system in a sandbox is a failure. SELinux on Linux has been able to block root access to kernel tasks. These coming changes are different: the kernel address space isolation work is breaking kernel space into multiple page-table sets, so not even a kernel task can see all of memory. This alteration means that, with the wrong namespace, a root or kernel process is in the wrong page tables and so cannot see the protected data. fscrypt and LUKS in the Linux kernel already support this kind of splitting, and it is coming to general file access as well.

                  Mainline Linux kernel file system design is going in a direction ZFS is not following.

                  Originally posted by k1e0x View Post
                  fscrypt should work just fine with ZFS. Anything file-level works.
                  Not properly, because ZFS is not integrated into the page-table handling of the Linux kernel that tracks data from block device to file system. fscrypt is a common API used by Linux kernel filesystems for implementing encryption, not a purely file-system-level thing, because its memory protection reaches back down through the stack. fscrypt requires functional support from the file system, for security reasons like data memory isolation, that ZFS does not provide at this stage.

                  Basically, where is ZFS's ability to perform an operation where all data about that operation is kept in a restricted page table and not spread processing-wide into caches, so that even the encrypted form read from disk is not put into general memory anywhere? This protected memory splitting needs to be functional on non-encrypted file access as well.

                  If a non-encrypted directory is only to be accessed from namespace X, the caching entries from the file system should only be in that namespace's kernel-space page table; this is your basic starting point. This will require a major restructure in the way ZFS operates. Mainline Linux kernel filesystems and page tables are being restructured to do this.

                  So like it or not, the mainline Linux kernel is going in a different direction to ZFS development at the moment.

                  Originally posted by k1e0x View Post
                  homed ignores Unix uid/gid; well, that is security as well, and all that tells me is that RedHat can't figure out how to use LDAP internally. uid/gid are important, especially on an NFS network, but you know, if RedHat wants to go back to DOS.. be my guest.
                  This

                  Originally posted by k1e0x View Post
                  You can see that RedHat is doing things here to let Linux operate only in a Microsoft Windows enterprise network.. Is that how it's going to be? Are we just going to let Microsoft win the battle and abandon NFS for CIFS and LDAP for AD? I say no.
                  And this is the same thing. Redhat is working on FreeIPA for network authentication; that is an NFS and LDAP solution with lots of extras. Homed is not about that problem.

                  Let's say you are managing many VMs. Does each VM, when someone logs in, need to connect to a central LDAP server? Remember these VMs could be spread across multiple hosting locations, so now you have to replicate LDAP between hosting locations and pay for running LDAP servers at each one, any of which could be breached... Yes, this is going downhill quickly, as you are just expanding your attack surface.

                  So you do in fact need a system to distribute locally processed logins. Homed is working on making it possible to make a VM image that can be placed anywhere and have new users added after the fact, without altering the image or hooking it up to an LDAP server.

                  Basically, like it or not, there is a problem space LDAP does not address and homed is designed to address. There is also a problem space you run into where an update adds a user with a fixed UID/GID that now conflicts with a local user; yes, homed makes this problem auto-detecting and self-resolving. Basically a much smarter local account system.

                  Think about it: some cases need LDAP; in other cases a smart local account system that resolves conflict problems is all you really need. Being local, it does not have to be network-connected to other devices.

                  Something those suggesting LDAP don't consider: let's say I need to pull a server from production because of a possible malware infection or hardware issue. At this point you want this box connected to as few things as possible. So being able to add a maintenance user from a USB drive, without needing to connect the machine to the network, is a useful feature the LDAP option does not cleanly provide. Think about it: when that USB key is removed from the server, with homed that maintenance user login no longer exists to be hacked. See the next problem with LDAP: it is not really good for maintenance users, as it is really simple, with nothing physically visible, to leave a maintenance user active with a poor password.

                  Homed is truly a different class of problem to LDAP.

                  Homed's auto-resolve work could in time make LDAP support simpler to implement in a mixed-distribution environment than it currently is.

                  You mention AD. One of the advantages of AD with Windows is that, when using mixed versions of Windows, everything still works right most of the time, because of the Windows system for ID handling. Homed is working on adding more advanced ID handling to Linux. Really, homed is a stepping stone to fix one of the issues that make it hard to take the fight to Microsoft on the desktop.

                  Yes, you need to stop pushing LDAP as the fix here; see its problem and see what needs fixing. If you don't like what homed is doing, you need to develop something else that covers its problem space and runs locally; if it requires a network service like LDAP to work, you are outside this problem space of local accounts. Local accounts conflicting with LDAP accounts is a problem that does happen. An account ID (UID/GID) conflict resolution method is the kind of thing we needed yesterday, and no one really bothered making a fix until homed.
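                  The conflict-resolution idea in miniature (a hypothetical sketch of the policy only; the names, UIDs and remap rule here are illustrative, not homed's actual code):

```python
# Toy version of the policy described above: a package update introduces a
# user that must own a fixed UID which collides with an existing local user,
# and the local holder is automatically remapped to the next free UID.
local_users = {"alice": 1000, "bob": 1001}

def add_fixed_uid_user(name, uid):
    """Add a user that must own a specific UID, remapping any local holder."""
    holder = next((u for u, existing in local_users.items() if existing == uid), None)
    if holder is not None:
        free = max(local_users.values()) + 1   # next free UID (simplistic)
        local_users[holder] = free             # remap; files would be re-chowned here
    local_users[name] = uid

add_fixed_uid_user("svc-backup", 1000)   # collides with alice
assert local_users["svc-backup"] == 1000
assert local_users["alice"] == 1002      # alice was remapped to a free UID
```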



                  • #29
                    Originally posted by starshipeleven View Post
                    What is stopping you from using systemd in embedded devices and single-purpose scenarios? It's not lack of resources, I guess, because Debian is a pig compared to proper embedded-first distros like OpenWrt (which simply won't adopt systemd because of actual resource constraints, and which has its own custom init).

                    Also, if you want somewhere to migrate to, OpenWrt is a good choice I guess.
                    Using Debian is mostly down to familiarity, the abundance of precompiled packages, and scalability. I can use the same base in embedded scenarios, in servers, and in quite beefy workstations. It is very easy to customize (substituting the initrd with a custom one, substituting the init with a simple script as in NetBSD, making it boot into a read-only file system, starting X while avoiding session managers or logins, etc.), all while avoiding cross-compiling.

                    For the embedded scenarios OpenWrt, Alpine or Yocto would be a good fit, but would require more work for similar results. Their big advantage is the resulting image size: with Debian you're looking at 256 MB at minimum, whereas OpenWrt and Alpine can be squeezed into as little as 8 MB, IIRC. But for x86, even in embedded scenarios, 256 MB is not a really difficult requirement to fulfill, and boot times can be optimized with init customizations.
