Learn More About Systemd-Homed For How Linux Home Directories Are Being Reinvented


  • oiaohm
    replied
    Originally posted by k1e0x View Post
    I love you man but.. I laugh so hard reading your posts. UID 1 over the wire? absolutely not, are you insane?
    DRBL, a free and open source software for diskless remote boot in Linux


In the past I have done UID 1 over the wire. OK, I do want better protections than what I was using back then.

    Originally posted by k1e0x View Post
You only get UID conflict if you use different versions of Linux/Unix; if you use the same versions there is no conflict..
That is not true. There are some fun anti-virus/malware scanners for mail servers that had the bright idea of randomising their UID number every boot. These are cases where the Linux kernel user namespace is a tail-saver: give the software a fake UID 0 with UID mapping, so no matter what UID the anti-virus decided to choose inside the namespace, the system saw the same stable UID outside it, while the software got to believe what it liked inside the namespace.
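
For anyone who has not played with it, a rough sketch of the trick in Python (assumes Python 3.12+ for os.unshare and a kernel that allows unprivileged user namespaces):

```python
import os

# Map a fake UID 0 inside a new user namespace onto our real UID outside,
# so software that insists on picking its own UID sees a stable world.
outer_uid, outer_gid = os.getuid(), os.getgid()
os.unshare(os.CLONE_NEWUSER)                  # enter a fresh user namespace

with open("/proc/self/setgroups", "w") as f:  # must be denied before gid_map
    f.write("deny")
with open("/proc/self/uid_map", "w") as f:
    f.write(f"0 {outer_uid} 1")               # inside UID 0 == outside us
with open("/proc/self/gid_map", "w") as f:
    f.write(f"0 {outer_gid} 1")

print(os.getuid())                            # prints 0, with no real privilege
```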

    Originally posted by k1e0x View Post
Having a different UID schema on a client doesn't mean your directory structure is broken.. it means the client is broken.
How badly broken is that client? Bad enough that you cannot log in to fix it. That is what the anti-virus for mail servers from a very big company did. The auto-hardening of this anti-virus also nuked the root password, and the password of anyone else with UID 0 as well.

Systemd-homed having the means to find an unused local UID and create a user there allows you to dig yourself out of the hole when a UID conflict hits.
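
As a toy illustration of "find an unused local UID" (60001-60513 is the range the homed docs describe for its users; treat the exact numbers as an assumption):

```python
import pwd

# Scan local accounts and pick the first free UID in the homed-style range.
used = {entry.pw_uid for entry in pwd.getpwall()}
free_uid = next(uid for uid in range(60001, 60514) if uid not in used)
print(free_uid)
```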

    Originally posted by k1e0x View Post
    DeadRat created their own problem with allowing state information into /etc to cause the conflicts in the first place.
A read-only /etc using a snapshot method of change, which is what Redhat is doing, means there can be no state problem in /etc.

    Originally posted by k1e0x View Post
    User applications should never write to /etc and I'm going to say something really shocking here a lot of people are going to hate but underprivileged users should not be able to alter the system state or the network configuration etc.. sorry.. this is why we have authentication dialogues.
    Don't disagree. But that is not the /etc state problem.

1. Sysadmin goes to change /etc and creates a new snapshot of /etc.
2. Sysadmin makes all the needed changes in the /etc snapshot directory; this allows the administrator to save partway through a configuration change without affecting anyone.
3. Sysadmin runs a contained test against the new snapshot.
4. Sysadmin issues a go-live command that switches the system in one instant from the old configuration snapshot to the new one, removing the state problem.


This is not talking about unprivileged users. This is talking about a privileged user/application modifying /etc while an unprivileged application is reading from /etc. The unprivileged application may in fact read only part of the changes made by the privileged one. That is the /etc state problem. The same happens with the Windows registry. It is a form of race condition bug.

There is a way to solve this these days with some form of snapshotting; a sketch of the go-live switch follows.
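
A minimal sketch in Python, assuming /etc is reached through a symlink into a snapshot tree (ostree-style; the paths here are invented for illustration):

```python
import os

LIVE_LINK = "/etc"                     # live symlink that readers follow
NEW_SNAPSHOT = "/snapshots/etc-v2"     # fully prepared and tested copy

# Stage a new symlink, then rename() it over the live one. rename() is
# atomic, so every reader sees either the old snapshot or the new one,
# never a half-applied mix of the two.
staging = LIVE_LINK + ".new"
os.symlink(NEW_SNAPSHOT, staging)
os.rename(staging, LIVE_LINK)
```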


Originally posted by k1e0x View Post
I think the model for Desktop Unix has been so abused trying to get all the Unix variants to play nice and operate with the likes of Microsoft.. I'm not really sure what the proper modern example of that would be anymore.. Solaris? macOS? You got me.. RedHat accurately sees this problem, however their solution seems to stray pretty far from other examples and the historical ones. It is a Microsoft/Millennial solution to a very old problem and they need to go back and really think about what they are trying to do.


    Originally posted by k1e0x View Post
    The fork of Sun ZFS and OpenZFS happened at pool version 28. Right before Sun was sold and closed up OpenSolaris they released ZFS pool version 30 that had a totally busted encryption scheme. (The word is from Sun Engineers that Sun didn't exactly put their "A Team" on this.) Oracle had to spend a lot of time and energy to redesign it from the ground up and turns out.. they fucked it up again.. So it's broken in Oracle Solaris last I heard.. but who cares about Oracle.

In all that Oracle and Sun mess, find me a single FIPS certificate covering the ZFS encryption or the checksums. ZFS was bespoke, uncertified junk encryption and checksums when Sun and Oracle did it..


    Originally posted by k1e0x View Post
    OpenZFS had been working on this problem for a very long time and it's been implemented for about 3 years now..
OpenSSL, being uncertified, went along fine for over a decade until Heartbleed, so 3 years is absolutely nothing to write home about. The faults that caused Heartbleed would have caused OpenSSL to fail FIPS certification. Basically, have you learnt nothing from this?

    Originally posted by k1e0x View Post
    So far their design has held up.
Basically, if your defence is "so far their design appears to hold up", there is no way I am buying that.

    Originally posted by k1e0x View Post
Certifications cost money but it's a nice design allowing for send/receive, deduplication and compression etc.
Yes, certification costs money, and Redhat, like it or not, is paying it, the same as other parties.


Now if ZoL as a project cannot afford to pay, it needs to get the encryption and checksums out of itself somehow and have them processed in something that some other party is paying the FIPS certification for.

The person doing this failed to understand why encryption with compression is not recommended without serious review. Putting encryption and compression into a single solution is a path to hell. https://en.wikipedia.org/wiki/Known-plaintext_attack A known-plaintext attack can be a side effect of the wrong form of compression used with your encryption.
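
A quick Python toy showing the leak (illustrative only, nothing ZFS-specific): with a length-preserving cipher the ciphertext length equals the compressed length, so an attacker who can place a guess next to a secret learns from the output size alone.

```python
import zlib

secret = b"token=hunter2"

def visible_len(guess: bytes) -> int:
    # Stand-in for len(encrypt(compress(guess + secret))) under a stream
    # cipher: encryption hides content but not length.
    return len(zlib.compress(guess + secret))

print(visible_len(b"token=hunter2"))  # correct guess compresses well: shorter
print(visible_len(b"token=xxxxxxx"))  # wrong guess: longer output
```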

Yes, deduplication also gives you clues for a possible known-plaintext attack.

Basically everything you have written up as a so-called nice feature of the encrypted file system, I read as "this needed massive review yesterday". It is not a nice design to validate at all, so flaws are likely.

    Originally posted by k1e0x View Post
    Here is OpenZFS's implementation https://www.youtube.com/watch?v=frnLiXclAMo
That is not the OpenZFS implementation; it is a kind of overview of what the implementation should be, with no certification to say that it works.

Sorry, that video does not make me any happier. That complete video is basically a dumpster fire of all the things you should not do when implementing an encrypted file system unless you have serious money to pay for a massive review, because there are so many creative ways it can screw up.

A fun one I had once: an AES-encrypted 7z archive that was going onto an AES-encrypted volume. Guess what, I lucked out in the worst possible way when I performed a system check for classified data. A few files were just stored inside the 7z, not compressed, and the result was horrible. I lucked out with a mirror, in that one AES key encoded the data, then the second AES key that was meant to encode the data again instead decoded it. So what was meant to be doubly encrypted data was in fact sent to disc as plain text. This was using all certified stuff, just using it in a risky way.
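
Something like that double-layer failure is easy to reproduce even with certified primitives. A sketch with the pyca/cryptography library, assuming the pathological case of both layers sharing the same key and nonce (not how any sane tool derives keys):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)

def aes_ctr(data: bytes) -> bytes:
    # CTR mode XORs data with a keystream derived from key + nonce.
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(data) + enc.finalize()

plaintext = b"classified material"
layered = aes_ctr(aes_ctr(plaintext))  # second layer XORs the same
                                       # keystream straight back off
assert layered == plaintext            # "encrypted twice" lands on disc in the clear
```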

Your risks with encryption go up a lot when what you are using has not been properly certified; there are so many minor ways you can screw encryption up completely and not notice for ages.

The ZFS solution for encryption is bespoke junk in the highly dangerous class.



  • k1e0x
    replied
    Originally posted by oiaohm View Post

    Lol

I love you man but.. I laugh so hard reading your posts. UID 1 over the wire? absolutely not, are you insane? You only get UID conflict if you use different versions of Linux/Unix; if you use the same versions there is no conflict.. Having a different UID schema on a client doesn't mean your directory structure is broken.. it means the client is broken (and not legitimately a valid client on your network, call security). There are other solvable methods to this that don't involve breaking NFS. If you're in a Unix domain style network and you have a proper LDAP then you really shouldn't be using Samba.. buuut.. we just can't get over the fact that Microsoft exists and does not play fair. Makes me wish Novell Netware was still around really..

You can wag FHS and drop Posix standards on me all day. I don't really care much anymore.. Sure it's poorly named, but /etc is for system configuration now and it's been like that for a very long time. Change my mind? - Linux breaks the shit out of man hier anyhow. The odd part is most of the layout of a Unix system's file hierarchy is designed specifically for what redhat is trying to change. That is thin clients and network mounts to user homes and programs. That is why /bin, /usr/bin and /usr/local/bin exist. /bin is to boot the system, /usr/bin is a network image of software set by your admin, and /usr/local/bin is anything -you- installed. It doesn't work anymore and Linux would go absolutely ape shit if you removed /usr/bin but.. ya.. different point. You get the idea, Linux is an idiom for broken Unix anyhow so it's no surprise they messed up /etc too.

    DeadRat created their own problem with allowing state information into /etc to cause the conflicts in the first place. So much shit is modifying things in there they have this problem now. User applications should never write to /etc and I'm going to say something really shocking here a lot of people are going to hate but underprivileged users should not be able to alter the system state or the network configuration etc.. sorry.. this is why we have authentication dialogues. "Oh we broke the system with systemd doing too much shit.. what now? Undo it? noo.. just force it to work and break it more!"

I think the model for Desktop Unix has been so abused trying to get all the Unix variants to play nice and operate with the likes of Microsoft.. I'm not really sure what the proper modern example of that would be anymore.. Solaris? macOS? You got me.. RedHat accurately sees this problem, however their solution seems to stray pretty far from other examples and the historical ones. It is a Microsoft/Millennial solution to a very old problem and they need to go back and really think about what they are trying to do.

Originally posted by oiaohm View Post
This is one of the big problems with ZFS these days: the ZoL developers are not like the Sun ones back in the day. The Sun developers did all the required FIPS certifications and other encryption certifications. But time moves on; what was FIPS certified in the past is not FIPS certified now.
You... really need to stop talking about ZFS as you know next to nothing about it but... at least you give me an example to set ppl straight. Ok grasshopper sooo.. The fork of Sun ZFS and OpenZFS happened at pool version 28. Right before Sun was sold and closed up OpenSolaris they released ZFS pool version 30 that had a totally busted encryption scheme. (The word is from Sun Engineers that Sun didn't exactly put their "A Team" on this.) Oracle had to spend a lot of time and energy to redesign it from the ground up and turns out.. they fucked it up again.. So it's broken in Oracle Solaris last I heard.. but who cares about Oracle. OpenZFS had been working on this problem for a very long time and it's been implemented for about 3 years now.. So far their design has held up. Certifications cost money but it's a nice design allowing for send/receive, deduplication and compression etc. Here is OpenZFS's implementation https://www.youtube.com/watch?v=frnLiXclAMo



  • oiaohm
    replied
    Originally posted by k1e0x View Post
    He has some.. interesting views on systems that I tend to disagree with. Such as read only /etc
    This is project atomic and Fedora SilverBlue.

    Originally posted by k1e0x View Post
    That feels to me like what they want to do is deploy a desktop system (we are already in odd territory because Linux isn't primarily a desktop..)
Go watch the video again and pay closer attention. He gives examples of Project Atomic server deployments where the image is read-only, yet you want to be able to change the users who are able to perform maintenance on it. What he is working on covers desktop and server.

    Originally posted by k1e0x View Post
    But.. they want to deploy a desktop with a standard default image on it and do all the user management through the home directory.
This is right; that is part of the goal.

    Originally posted by k1e0x View Post
    (Presumably they feel the system would be managed through Ansible? The more complicated they can make that the more money they send to RedHat through licences and it hurts their competition such as SaltStack and Puppet. ) It's also highly single user. It kind of worries me they want this approach and I feel this is very reminiscent of a Windows style registry.
This is where you go into idiot mode. Ansible, SaltStack and Puppet happen to share a common defect with the Windows registry: the fact that you can have half state. One application is modifying the Windows registry or /etc, and another application accesses it while the operation is half done, so it sees a half state and does something stupid. When an administrator manually edits the /etc directory this can also create a half state. Project Atomic is basically a swap from snapshot to snapshot, with no half state visible to applications.

This alters your copy-on-write requirements. Does it matter that copy-on-write in the areas going snapshot to snapshot is a little slow? No. The more important thing is read speed.

    Originally posted by k1e0x View Post
    I would propose that /etc is the place where system configuration lives on Unix, that is the purpose of this directory.
Sorry, no. Bell called it /etc because it meant "etcetera directory": basically a garbage dump to put anything that did not fit anywhere else.

The Filesystem Hierarchy Standard later on said that /etc should only contain text-based configuration files. It does not say that the /etc directory has to contain only system configuration, so by FHS it is still legal to put user configuration files, writable by the user, in the /etc directory. Brilliant, right? Still a garbage dump.

    Originally posted by k1e0x View Post
    It is not for state information. /etc should always be writable for a sysadmin and the configuration files should be in clear text. Configuration systems like the Windows registry are big reasons people use Linux in the first place. Sysadmins like SIMPLE configuration.. such as on FreeBSD.
Again, a failure to see the common defect between the Windows registry and allowing users to write directly to the /etc directory. Like it or not, playing with live configuration files causes a state problem. Yes, it is one of the Windows registry's problems that when you are administrator you can write wherever you like in it.

A better workflow would look like the following:
1. Sysadmin goes to change /etc and creates a new snapshot of /etc.
2. Sysadmin makes all the needed changes in the /etc snapshot directory; this allows the administrator to save partway through a configuration change without affecting anyone.
3. Sysadmin runs a contained test against the new snapshot.
4. Sysadmin issues a go-live command that switches the system in one instant from the old configuration snapshot to the new one, removing the state problem.

Please note this operational path does not require the in-use /etc directory to be read/write, so a read-only snapshot is good enough.

Heck, it would help the Windows registry as well if this were implemented on Windows.

    Originally posted by k1e0x View Post
They aren't however well integrated. Canonical, are you listening? Get GDM working with ZFS native encryption and you'll have a better solution than Redhat here.
No. ZFS native encryption is bespoke junk. There is no point in Canonical integrating it, as ZoL does not have FIPS 140-2 certification on its encryption. The Redhat method uses all FIPS 140-2 certified stuff.

This is one of the big problems with ZFS these days: the ZoL developers are not like the Sun ones back in the day. The Sun developers did all the required FIPS certifications and other encryption certifications. But time moves on; what was FIPS certified in the past is not FIPS certified now.


    Originally posted by k1e0x View Post
    The UID/GID problems aren't really a Unix problem.. It's more caused by a lack of a well functioning directory server for heterogeneous environments. (Or reliance on AD) This is a solvable problem but it's a very non-sexy one so people tend not to work on it.
No, it is in fact a Unix problem that Linux caught. The UID/GID of a file on disc and the UID/GID of the file in memory have to be the same value on historic Unixes. Samba does an insane amount of abstraction to get UID/GID misalignment to work.

Life would be so many times simpler if we could say: this directory has misaligned UID/GID, so when reading/writing the directory's metadata, use this UID/GID conversion table.

This is not a directory server problem; in fact, using a directory server does not fix it. One of the problems directory servers run into is a UID/GID in the directory conflicting with a locally installed service. So the ability for client computers to use different UID/GID values in memory for the same user, while over the wire or on disc the user appears as the one UID/GID value, would be a godsend to directory servers on Linux.
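
A toy of that conversion-table idea in Python (all names here are invented; the kernel's later idmapped mounts work along similar lines):

```python
# Per-directory translation between on-disc and in-memory IDs.
UID_MAP = {0: 100000, 1000: 101000}     # on-disc UID -> in-memory UID
REVERSE = {mem: disc for disc, mem in UID_MAP.items()}

def to_memory(on_disc_uid: int) -> int:
    return UID_MAP.get(on_disc_uid, on_disc_uid)    # unmapped IDs pass through

def to_disc(in_memory_uid: int) -> int:
    return REVERSE.get(in_memory_uid, in_memory_uid)

assert to_disc(to_memory(1000)) == 1000             # round-trips cleanly
```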

    Originally posted by k1e0x View Post
    I also think it's funny he didn't want to talk about ZFS. hehehe Yes, "that's *just* a filesystem" (and it kicks our ass so we aren't going to talk about that.)
Really, him not wanting to talk about ZFS much because "that is just a filesystem" was in fact a statement about how much of ZFS is usable with ZoL, given the lack of required certifications. What is the point of going into ZFS encryption when that encryption path is not certified, so not usable?

Really funny that you took what he said the wrong way. Instead of waking up to the fact that ZFS for Linux has a very long way to go, you went: since he will not bother talking about the broken parts of ZFS, it has to be magically better. Sorry to break your bubble, but the current ZoL design is busted and not certified the way it should be.



  • k1e0x
    replied
    He has some.. interesting views on systems that I tend to disagree with. Such as read only /etc

    That feels to me like what they want to do is deploy a desktop system (we are already in odd territory because Linux isn't primarily a desktop..) But.. they want to deploy a desktop with a standard default image on it and do all the user management through the home directory. (Presumably they feel the system would be managed through Ansible? The more complicated they can make that the more money they send to RedHat through licences and it hurts their competition such as SaltStack and Puppet. ) It's also highly single user. It kind of worries me they want this approach and I feel this is very reminiscent of a Windows style registry.

    That feels really terrible to me as a sysadmin.. I would propose that /etc is the place where system configuration lives on Unix, that is the purpose of this directory. It is not for state information. /etc should always be writable for a sysadmin and the configuration files should be in clear text. Configuration systems like the Windows registry are big reasons people use Linux in the first place. Sysadmins like SIMPLE configuration.. such as on FreeBSD.

Again technologies already exist to solve these (somewhat bizarre) use cases. They aren't however well integrated. Canonical, are you listening? Get GDM working with ZFS native encryption and you'll have a better solution than Redhat here. Not only will you be portable, you'll also be atomic and incremental. Going back and forth between two "portable" home directories RedHat would have to move that entire home directory around or do huge diffs.. you don't on ZFS.

    The UID/GID problems aren't really a Unix problem.. It's more caused by a lack of a well functioning directory server for heterogeneous environments. (Or reliance on AD) This is a solvable problem but it's a very non-sexy one so people tend not to work on it.

    I also think it's funny he didn't want to talk about ZFS. hehehe Yes, "that's *just* a filesystem" (and it kicks our ass so we aren't going to talk about that.)
    Last edited by k1e0x; 11 February 2020, 03:00 PM.



  • oiaohm
    replied
    Originally posted by k1e0x View Post
    Redhat's addicted to dbus. They get off on it I think.

    Yeah this entire thing is stupid.. it's a solution looking for a problem.
dbus goes back to the fact that you could not dependably send signals to the right process under Linux.
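
The classic failure mode, sketched in Python with a hypothetical pidfile path: PIDs get recycled, so a raw signal can land on an unrelated process, which is the addressing gap dbus (and later pidfds) closed.

```python
import os
import signal

# Read a daemon's PID from its pidfile and signal it. Between the read and
# the kill() the daemon can die and the kernel can hand the same PID to a
# brand new process, which then receives our SIGHUP instead.
pid = int(open("/run/mydaemon.pid").read())   # hypothetical path
os.kill(pid, signal.SIGHUP)                   # racy addressing by PID
```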


    Originally posted by k1e0x View Post
ZFS can do the encryption with a script btw.. and it doesn't need a loopback device, and has pooled storage so it doesn't have limits on space, and can actually compress encrypted files on the fly.. and..
And it is an insecure garbage design that was not designed to work with the upcoming Linux kernel address space isolation, and needs to be rewritten to use the page cache and other things. Yes, the ZFS design also runs into issues on Epyc processors.

    Originally posted by k1e0x View Post
    He also talks a lot about resource limits, etc shadow problems, not being extendable etc .. yeah well.. that is very true and some OS's solved that.. man login.conf
Watch it again and work out that you are being a fool here. He is talking about defining resource limits on a portable home directory. Can that login.conf cope with a home directory used on two different systems requiring two different permission sets? The answer is no. For the homed .identity record, the answer is yes. So there is a problem FreeBSD has not solved yet.
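
For the flavour of it, a sketch of a per-machine identity record built in Python. The shape loosely follows systemd's JSON user record spec ("perMachine" sections matched by machine ID), but treat the exact keys and values here as illustrative, not authoritative:

```python
import json

identity = {
    "userName": "alice",
    "perMachine": [
        # Same portable home, different limits depending on which box it lands on.
        {"matchMachineId": ["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"], "tasksMax": 4096},
        {"matchMachineId": ["bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"], "tasksMax": 512},
    ],
}
print(json.dumps(identity, indent=2))
```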



  • k1e0x
    replied
    Originally posted by halo9en View Post
    I can't wait to have arbitrary code execution vulnerabilities all over my system! https://security.archlinux.org/CVE-2020-1712
    Redhat's addicted to dbus. They get off on it I think.

    Yeah this entire thing is stupid.. it's a solution looking for a problem.

ZFS can do the encryption with a script btw.. and it doesn't need a loopback device, and has pooled storage so it doesn't have limits on space, and can actually compress encrypted files on the fly.. and..

    He also talks a lot about resource limits, etc shadow problems, not being extendable etc .. yeah well.. that is very true and some OS's solved that.. man login.conf

    Not sure why Redhat keeps him employed..
    Last edited by k1e0x; 10 February 2020, 09:07 PM.



  • F.Ultra
    replied
    Originally posted by halo9en View Post
    I can't wait to have arbitrary code execution vulnerabilities all over my system! https://security.archlinux.org/CVE-2020-1712
    So you will never ever use any piece of software again I suppose.



  • halo9en
    replied
    I can't wait to have arbitrary code execution vulnerabilities all over my system! https://security.archlinux.org/CVE-2020-1712



  • Zucca
    replied
    Originally posted by wizard69 View Post
I've been using Linux long enough to realize what happens to software where the developer can't commit himself to the project. There are thousands of projects that died over the years because someone's priorities changed. That might be mouths to feed, kids to clothe, or even a desire to get back in touch with nature.

When your job is in fact software development, it is far easier to shepherd a project than it is for some noob donating his time.
    Exactly.
    Time is valuable. When you're paid to code it gets done.



  • Ananace
    replied
    Originally posted by intelfx View Post
    Thanks for the explanation, Ananace. Tying encryption key lifetime to the power state changes or the idle timer is pretty close to optimal.

Do you know, on the off-chance, are they going to use the idle timer directly or just the session lock status?
    No clue to be honest, though I wouldn't be surprised if the first version just uses systemd service triggers for the locking.

