
Systemd In Ten Years Has Redefined The Linux Landscape


  • oiaohm
    replied
    Originally posted by ALRBP View Post
    Something I could have done with ONE script needs several services files, with dependencies between them, to be done with systemd. Yes, this is more complicated than just one script/service.
    I have done exactly that to prove a point: a full desktop Linux system with 1 systemd service file. Debugging it if something goes wrong is a nightmare from hell, but it works.

    Originally posted by ALRBP View Post
    First, I KNOW how to operate systemd.
    Not exactly. You have been taught the recommended way to use systemd. You don't know how to push the system to its absolute limits, so you don't really know how to fully operate systemd at all. Comparing like with like: what you can do in 1 script you can in fact do in 1 service file.

    https://www.freedesktop.org/software...d.service.html The command lines section here kind of suggests it.
    Service files allow as many lines of Exec*= stuff as you need, with * being a wildcard for Start, Stop, StartPre and so on. I think it was 2000 ExecStart= lines in that demo service file.

    Basically, what you did as 1 script you could do as a single service file under systemd. It's just not recommended, because it makes debugging issues harder.
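    A minimal sketch of what such a kitchen-sink unit could look like; all paths and names here are made up for illustration, and note that multiple ExecStart= lines are only permitted with Type=oneshot:

    ```ini
    # Hypothetical all-in-one unit: several commands chained in one service
    # file, the way a single init script would run them.
    # Multiple ExecStart= lines require Type=oneshot.
    [Unit]
    Description=Everything-in-one-service demo

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStartPre=/usr/local/bin/prepare-environment.sh
    ExecStart=/usr/local/bin/start-display-server.sh
    ExecStart=/usr/local/bin/start-session.sh
    ExecStop=/usr/local/bin/tear-everything-down.sh
    ```

    The commands run in order, and if one fails the whole unit fails, which is exactly why debugging such a unit is painful: journalctl shows everything under one unit name.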

    Yes, this is another of those false ideas about systemd. Systemd suggests you do something and attempts to convince you to go in that direction because long term it will be better for you. If you are stubborn, you don't have to.

    Yes, I did that full Linux desktop start-up using a single systemd service file. I do mean single; it was a demo for a boss whom I was sick of hearing your argument from. He was very much "oh, so that does in fact work". Not recommended, but the story that you need to use multiple service files is rubbish. Using multiple service files is recommended because it makes logging cleaner and diagnosing individual failed parts simpler.

    What's the next anti-systemd garbage argument going to be?




  • rtfazeberdee
    replied
    Originally posted by Britoid View Post


    shell script = potentially 100s of lines of shell code calling different programs to try to get the same thing.

    I know which one I prefer.
    Plus, the shell script is not guaranteed to run on any other distribution without modification, which I think is a major point in favour of the config files.



  • rtfazeberdee
    replied
    Originally posted by Paul Frederick View Post

    This isn't a court of law numb nuts. I don't use that DE anymore because it didn't work. SystemD made me fall back to how I ran Linux 20 years ago. How's that for progress?
    That says more about you than anything else, i.e. a workman blaming his tools.



  • Britoid
    replied
    Originally posted by ALRBP View Post
    Something I could have done with ONE script needs several services files, with dependencies between them, to be done with systemd. Yes, this is more complicated than just one script/service.

    Just because it's only one file doesn't mean it's less complicated; we don't count complexity by file count.

    systemd = a few small files describing each service and their relation to each other; systemd handles making sure they start up in the correct order
    shell script = potentially 100s of lines of shell code calling different programs to try to get the same thing.

    I know which one I prefer.
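    As a rough sketch of the first option, a unit that declares its dependency and lets systemd handle the ordering; the unit and binary names below are illustrative, not from any real distribution:

    ```ini
    # myapp.service -- hypothetical: declare what this service needs and
    # let systemd work out the correct start-up order.
    [Unit]
    Description=My application
    Requires=mydb.service
    After=mydb.service network-online.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    ```

    Requires= says mydb.service must be started too; After= says this unit waits for it. That ordering logic is exactly the part an init script would have to reimplement by hand.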



  • ALRBP
    replied
    Originally posted by arokh View Post

    This is ridiculous, here we have one more guy who just doesn't know how to read the manual and blames it on systemd. You realize that keeping services and their dependencies running is its ace, right? Less convenient when doing something "complicated"? Incredible.
    Something I could have done with ONE script needs several services files, with dependencies between them, to be done with systemd. Yes, this is more complicated than just one script/service.

    Originally posted by arokh View Post
    Your opinion on something you are unable to even operate isn't that interesting, it's like listening to someone who can't drive a car explain about car engines.
    First, I KNOW how to operate systemd. Second, if you want an opinion on car engines, you are better off talking with an engineer who never learned to drive than with a random car driver who does not even know what "thermodynamics" means. So your example is bad.

    The other thing is that the opinion I am giving is about systemd as a whole, not as an init system. The fact is that systemd is not (only) an init system. An init system has nothing to do with user sessions, devices or bootloading.

    @starshipeleven
    A set of command line tools vs something that is nearly a whole OS, are you serious?



  • oiaohm
    replied
    Originally posted by archsway View Post
    Databases never need to be started after a webserver, so all you have to do is make sure all databases are started before web servers.
    This is a logical trap and a half. So a database you are not currently using in production, which someone spun up for testing for some reason, now results in your webserver not starting???

    Originally posted by archsway View Post
    Even if your webserver isn't using the database you set up (what else would use it, though?) all that would mean is that it takes a couple of seconds extra for the webserver to start up, which is unlikely to be a problem as the network connection is more likely to be the bottleneck.
    Funny, right? A couple of extra seconds it is not. It is basically forever: some database the webserver did not even depend on is now broken, and because of what you suggested, the webserver never starts.

    Databases can be used for quite a few things other than a webserver. You might have remote syslogs coming in and being stored in a database. Say that database got overfull and died, and its management interface is on the webserver: now the webserver is not starting. Brilliant idea.

    https://qgis.org/en/site/
    You might have a PostgreSQL database on the server for your QGIS desktop users to use, while your webserver's sites are PHP and MySQL. If PostgreSQL does not start in this case, the webserver should still have started, as it is not using that database.

    With the QGIS backend database being PostgreSQL with PostGIS, you can be talking about a database where you need 400TB of storage and that can generate a hell of a lot of memory and CPU pressure, so you really do want to shut it down when no one is using it. This is one of the fun cases where you can have a start-on-demand database.

    Another example is GnuCash storing its data in PostgreSQL or MySQL. Basically, there is a long list of applications that are not webservers that, inside a business, may be using a database backend that happens to be stored on a server running a webserver as well. That webserver might be the system management front end, with no direct connection to whether those databases are up.

    You might be spinning up, say, two instances of PostgreSQL: one for the QGIS/GnuCash/some-application stuff and one for the webservers, both on the same box. Of course, the webserver only needs to not start when the webservers' instance does not start.

    Also, the idea that databases never need to be started after webservers is wrong. Say your accountancy staff are only in 2 days a week.
    https://wiki.archlinux.org/index.php/Systemd/Timers
    A fun feature of systemd timers is that you could have databases enabled that only run, say, 2 days a week, when the staff who need them are there. So what, in a case like this should the webserver only run 2 days a week as well, when you have other staff there 7 days a week needing to access it???
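    A hedged sketch of such a schedule; the OnCalendar= syntax is real, but the unit names are hypothetical:

    ```ini
    # accounting-db.timer -- hypothetical timer that brings the database
    # up only on the two weekdays the accounting staff are in.
    [Unit]
    Description=Start the accounting database on Mondays and Tuesdays

    [Timer]
    OnCalendar=Mon,Tue 08:00
    Unit=accounting-db.service

    [Install]
    WantedBy=timers.target
    ```

    The timer only starts the hypothetical accounting-db.service; shutting it down at the end of the day would need a second timer or a stop job, which is omitted here.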

    Basically, this is not as simple a problem as you think it is. As you get more experience you will learn that not everything is about the webserver when it comes to what databases are up to. Not everything in a business needs to be running at the same time.

    Heck, you might assign a server multiple IP addresses and be running multiple httpd servers, each depending on different database instances that only need to run on particular days of the week. If one has a broken database instance for some reason, there is no reason for the other to fail as well. Your stupid configuration idea would cause exactly that.

    The databases a webserver instance depends on need to be up before that instance, but the databases the instance does not depend on absolutely do not need to be up before it. That is the problem: systemd can only know that if you set it in the service unit files.
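    A minimal sketch of expressing exactly that per instance; the drop-in path and unit names are made up for illustration:

    ```ini
    # /etc/systemd/system/httpd.service.d/override.conf -- hypothetical
    # drop-in: this webserver waits only for the database it actually uses.
    [Unit]
    Requires=mariadb.service
    After=mariadb.service
    # Deliberately no ordering against postgresql.service: if the desktop
    # users' PostgreSQL instance fails, this webserver still starts.
    ```

    The point is that the dependency is declared per webserver instance, not as a blanket "all databases before all webservers" rule.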

    Basically, I can write up tons more examples where you can have a webserver and databases on the same box, with many of those databases hosting data that has nothing to do with the websites the webserver is providing.



  • archsway
    replied
    Originally posted by oiaohm View Post
    Webserver + a database is not a bog-simple solution to set up right, because the database used with the webserver could be PostgreSQL, MySQL, MariaDB... and the list goes on. Webserver + database means you have to manually alter the init system so it does the right things, and it is not possible for the distribution to set this up right out of the box. Just because you install a database does not mean the webserver will be using it either. This was in fact true back with sysvinit as well; this is not a new problem. Systemd has a different way of doing this compared to old sysvinit.
    Databases never need to be started after a webserver, so all you have to do is make sure all databases are started before web servers.

    Even if your webserver isn't using the database you set up (what else would use it, though?) all that would mean is that it takes a couple of seconds extra for the webserver to start up, which is unlikely to be a problem as the network connection is more likely to be the bottleneck.

    Any patch implementing that would be shot down by system-D developers as "Not a bug" though, so that will never actually happen.



  • oiaohm
    replied
    Originally posted by k1e0x View Post
    Really? Would you like to be specific?

    What case exists where jails or zones do not provide exactly the same performance values as the host at the same time isolating the process? The implementations I know are not perfect.. but they are better than the upstack illusion of a container Linux provides. Linux needs to start from scratch and implement this properly.
    https://aboutthebsds.wordpress.com/2...curity-danger/
    I stay well clear of talking about FreeBSD jails; the post above covers why. It covers the history: the FreeBSD jails core design does not come from a trustworthy coder, as well as performing badly. The performance problem is caused by a lot of excess isolation.

    Solaris Zones and Linux cgroups/namespaces suffer from basically the same set of problems.

    1) For the first problem type, I can point to a recent example of it in Linux.
    https://lkml.org/lkml/2019/9/5/1132
    This is an example where full isolation can rapidly turn into a hindrance. This was one of the cgroup goof-ups, of a particular class that results in extra memory usage. Solaris Zones have many goof-ups like this in their implementation.

    These goof-ups mostly look the same: you isolate a process and effectively duplicate the memory incorrectly for some reason. This duplicated memory means your memory management has to work harder to defragment memory. Not being able to get large contiguous allocations of memory starts affecting IO performance. At this point your performance disappears into hell.

    k1e0x, this is really the board game I talked about. For perfect security you are going to duplicate up the memory, so that some missing memory protection flag cannot allow a cross-breach, but doing this undermines stability and performance. So that deduplication fix in the Linux kernel for slabs is incorrect for perfect security. Maybe we want this to be configurable; why that may not be the answer is point 3.

    2) Then you have the Linux PID/network/... namespace problem, or the Solaris Non-Global Zones problem. This is like the first problem with an extra side of hell. The applications in these namespaces/zones have to be presented with information that looks like a full system even though they are only seeing partial information. This is mandatory memory duplication that may come back and bite you, and in many cases it also has to be kept synced. This syncing takes extra CPU time.

    If your workload is hitting the second one, it can be faster to run your workload in KVM instead. Again, maybe we want configuration here so we can avoid using these things when they make no security sense.

    So it is really hard working out how to do cgroups and zones exactly right. Get it wrong and you can have massive performance hits that appear absolutely random.

    The Linux kernel did start over with cgroups once already; that is why we have cgroups v1 and cgroups v2. Cgroups v2 is way better designed than the first one. Cgroups v1 broke the zone design apart way too far by allowing multiple trees. But allowing users to skip the namespaces when they are not required is what gives cgroups a performance advantage over zones.

    A container on Linux is a conceptual construct built on top of cgroups and namespaces.

    Basically, k1e0x, there is no single absolutely right answer for every usage case. So for this stuff we need to get a stack of settings right.

    3) Welcome to the third nightmare. As you add options for configuring the system, you potentially add CPU overhead to the complete system, as you need to check which options apply. This is something Solaris managed to do, leading to it being nicknamed Slowaris.

    Basically, this is one very hard game to win. Every path you can think of as a solution to the zones or cgroups/namespaces problem can in fact end up killing performance or security or both, with an extra side of sometimes completely screwed-up stability.

    The problem is that a perfect implementation of cgroups/namespaces and zones for security will be slow. Redox OS cannot avoid this. So you need to make an imperfect solution to get performance; the problem is how to achieve imperfection for performance without reducing security too much. Basically, we do not want to do an Intel with speculative execution.
    Last edited by oiaohm; 21 December 2019, 02:59 AM.



  • k1e0x
    replied
    Originally posted by oiaohm View Post

    One of the funny parts here is that cgroups/namespaces in the Linux kernel started from Solaris Zones, as an attempt to simplify it and reduce the overhead cost. This has been fairly successful, with some major goof-ups. You do not always need full process isolation and containerisation; sometimes it is in fact a hindrance to memory usage and performance.
    Really? Would you like to be specific?

    What case exists where jails or zones do not provide exactly the same performance values as the host at the same time isolating the process? The implementations I know are not perfect.. but they are better than the upstack illusion of a container Linux provides. Linux needs to start from scratch and implement this properly.
    Last edited by k1e0x; 21 December 2019, 12:17 AM.



  • oiaohm
    replied
    Originally posted by k1e0x View Post
    Yes. I agree with a lot of that. I think Linux cgroups and namespace is part of the problem. That needs to be re-engineered honestly a lot more like Solaris Zones (something that is the model for Redox) and true process isolation and containerization.
    One of the funny parts here is that cgroups/namespaces in the Linux kernel started from Solaris Zones, as an attempt to simplify it and reduce the overhead cost. This has been fairly successful, with some major goof-ups. You do not always need full process isolation and containerisation; sometimes it is in fact a hindrance to memory usage and performance.

    Originally posted by k1e0x View Post
    Determining if a pid has gone down, needs to go down, or has been reused is a much larger topic and a philosophical one.
    There is a practical one as well, when it comes to resource usage and making sure you don't risk PID counter overflow. Or you simply avoid having PID values at all; the Plan 9 OS did this, yes, the first OS that truly did everything as a file.

    Originally posted by k1e0x View Post
    I find I'm often of the opinion that if a process dies it should not be restarted, it should core dump and be investigated to find out why it went down and fixed as it isn't operating as intended and therefore needs a fix.
    This is generally what systemd does, and it is exactly what user Farmer is running into.

    Originally posted by k1e0x View Post
    Restarting it is very "windows sysadmin esque" and that bothers me a lot that Linux seems to think this is ok.
    Auto-restarting services predates Windows and first appeared on various Unix systems; a lot of people wrongly link it to Windows. You had people running Linux back in 2001 using daemontools, which would also auto-restart services. Before that you had cron jobs from the Unix world that would, say, every 15 minutes check the status of a service and restart it if it was not up. All of these could go horribly wrong.
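    The old cron pattern looked roughly like this (the daemon name is hypothetical); note it cannot tell a cleanly stopped service from a crashed one, which is one of the ways it went horribly wrong:

    ```shell
    # Hypothetical pre-systemd crontab entry: every 15 minutes, restart
    # mydaemon if no process by that name is running.
    */15 * * * * pidof mydaemon >/dev/null || /etc/init.d/mydaemon restart
    ```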

    This is the point: restarting services is in fact very Unix sysadmin. So we need to provide a way to do it well.
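    In systemd terms, "doing it well" is roughly opt-in restart plus rate limiting, so a crash loop eventually gives up rather than spinning forever. A sketch with hypothetical names:

    ```ini
    # Hypothetical unit fragment: restart on failure, but stop trying
    # after 3 failed starts within 60 seconds instead of looping forever.
    [Unit]
    StartLimitIntervalSec=60
    StartLimitBurst=3

    [Service]
    ExecStart=/usr/local/bin/some-daemon
    Restart=on-failure
    RestartSec=5
    ```

    Restart=on-failure means a clean stop is left alone, while a crash triggers a restart after 5 seconds, up to the rate limit.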

    Originally posted by k1e0x View Post
    There are some things where this behaviour is ok, necessary and good.. but those are edge cases and should not be the norm.. In my opinion.
    Edge cases include remote management, like your ssh, which you don't want dead, as well as third-party print drivers that you cannot fix. So auto-restarting services is something we kind of need, and we need it to work well, without all the past failures where it would go horribly wrong and not restart when needed because a fragment of the service was left running.

    Originally posted by k1e0x View Post
    I think the moral of this story is OS development and improvement still has a long way to come.
    I agree OS development still has a long way to go. The problem is there is no exact black-and-white answer.
    Draw a triangle (a fairly huge one, by the way; best to read all of this before drawing). At each of the three corners put the following words.
    1) Fast
    2) Secure
    3) Stable

    Now at the dead centre of the triangle put a point with the word "garbage". Draw 3 lines from the centre to the 3 corners of the triangle, with the value starting at 0 at the centre and 100 at the corners. You now get 3 tokens. You have 200 points to spend to move those 3 tokens from the centre towards the outside. That 0 to 100 is a percentage of perfection.

Welcome to the game of compromises.

    The outer triangle you drew would be ideal perfection, which is totally not achievable. Playing with 200 points is basically the real world.

    You are now looking at a picture of the choices when making an OS. The closer you get to fast, the less secure and stable you are. The closer to stable, the less secure and fast you are. The closer to secure, the less stable and fast you are.

    A lot of people would think stable would improve in line with secure. Remember, you have run all your tests to prove something is stable; now you have to apply a security patch, so you have to run all those tests again. See the treadmill?

    Also, on some security events, grsecurity was known to trigger a kernel panic, as that was the most secure thing to do. This is not what you want something stable to do; instead you want it to shut down safely.

    I have done this game of compromises as a board game before; instead of 0-100 I used 0 to 6, with a maximum of 12 points assignable. A very warped game.

    Speed of movement is capped by the fast value you set, so if fast is set to 0 you are not moving. So you absolutely cannot win if you max out stable and secure.

    The board is a simple chessboard, played like a snakes-and-ladders board. A black square is a security issue, a white square is a stability issue. If the dice roll that moves you onto a square is greater than the security you set (on a black square) or the stability you set (on a white square), back to the start you go. At the start point, off the board, you can change your 3 values of fast, stable and secure, of course without breaking the 12-point limit. For such simple rules it is an insanely hard game to get to the end. Yes, this is a game where rolling a 6 pretty much means you are screwed.

