Systemd In Ten Years Has Redefined The Linux Landscape


  • #71
    Originally posted by k1e0x View Post
    Really? Would you like to be specific?

What case exists where jails or zones do not provide exactly the same performance values as the host while at the same time isolating the process? The implementations I know are not perfect... but they are better than the upstack illusion of a container that Linux provides. Linux needs to start from scratch and implement this properly.
https://aboutthebsds.wordpress.com/2...curity-danger/
I steer well clear of talking about FreeBSD jails; the post above covers why. It also covers the history: the core design of FreeBSD jails does not come from a trustworthy coder, and it performs badly. The performance problems are caused by a lot of excess isolation.

    Solaris Zones and Linux cgroups/namespaces suffer from basically the same set of problems.

1) For the first problem type, I can point to a recent example of it in Linux.

This is an example where full isolation can rapidly turn into a hindrance. It was one of the cgroup goof-ups, and it belongs to a particular class of goof-up that results in extra memory usage. Solaris Zones have many goof-ups like this in their implementation.

This type of goof-up pretty much always looks the same: you isolate a process and effectively duplicate its memory incorrectly for some reason. That duplicated memory means your memory management has to work harder to defragment memory, and not being able to get large contiguous allocations of memory starts affecting IO performance. At that point your performance disappears into hell.

k1e0x, this is really the board game I talked about. For perfect security you duplicate up the memory so that some missing memory protection flag cannot allow a cross-breach, but doing this undermines stability and performance. So that deduplication fix for slabs in the Linux kernel is incorrect for perfect security. Maybe we want configuration here; why that may not be the answer is point 3.

2) Then you have the Linux PID/network/... namespace problem, or the Solaris non-global zones problem. This is like the first problem with an extra side of hell. The applications in these namespaces/zones have to be presented with information that looks like a full system even though they are only seeing partial information; this is mandatory memory duplication that may come back and bite. This duplicated information also has to be kept in sync in many cases, and that syncing takes extra CPU time.

If your workload is hitting the second problem, it can be faster to run the workload in KVM instead. Again, maybe we want configuration here so we can avoid using these things when they make no security sense.

So it is really hard to work out how to do cgroups and zones exactly right. Get it wrong and you can take massive performance hits that appear absolutely random.

The Linux kernel did start over with cgroups once already; that is why we have cgroups v1 and cgroups v2. Cgroups v2 is far better designed than the first attempt. Cgroups v1 broke the zone design apart way too far by allowing multiple trees. But the fact that cgroups lets users skip the namespaces when they are not required gives it a performance advantage over zones.

A container on Linux is a conceptual construct built on top of cgroups and namespaces.
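To make the opt-in point concrete, here is a minimal sketch of how that surfaces through a systemd service (the unit is made up and the limits are just illustrative): the resource-control settings map onto cgroup controllers, while each kind of namespace isolation costs nothing unless you switch it on.

Code:
[Service]
# cgroup v2 resource control: only the controllers you set are engaged
MemoryMax=2G
CPUQuota=50%
TasksMax=256
# namespace isolation is opt-in per directive; leave these out and the
# service sees the host view, with none of the duplication overhead
PrivateNetwork=yes
PrivateTmp=yes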

Basically, k1e0x, there is no single absolutely right answer for every use case. So for this stuff we need a stack of settings done right.

3) Welcome to the third nightmare. As you add options for configuring the system, you potentially add CPU overhead to the complete system, because you need to check which options apply. Solaris managed to do exactly that, leading to it being nicknamed Slowaris.

Basically this is one very hard game to win. Every path you can think of as a solution to the zones or cgroups/namespaces problem can in fact end up killing performance or security or both, with an extra side of sometimes completely screwed-up stability.

The problem is that a perfect implementation of cgroups/namespaces or zones for security will be slow; Redox OS cannot avoid this either. So you need to make an imperfect solution to get performance, and the problem is how to achieve that imperfection without reducing security too much. Basically, we do not want to pull an Intel with speculative execution.
    Last edited by oiaohm; 21 December 2019, 02:59 AM.



    • #72
      Originally posted by oiaohm View Post
A webserver + a database is not a bog-simple thing to set up right, because the database used with the webserver could be PostgreSQL, MySQL, MariaDB... and the list goes on. Webserver + database means you have to manually alter the init system so it does the right things; it is not possible for the distribution to set this up right out of the box. And just because you install a database does not mean the webserver will be using it. This was in fact true back with sysvinit as well; it is not a new problem. Systemd just has a different way of doing it compared to old sysvinit.
      Databases never need to be started after a webserver, so all you have to do is make sure all databases are started before web servers.

Even if your webserver isn't using the database you set up (what else would use it, though?), all that would mean is that it takes a couple of seconds extra for the webserver to start up, which is unlikely to be a problem as the network connection is more likely to be the bottleneck.

      Any patch implementing that would be shot down by system-D developers as "Not a bug" though, so that will never actually happen.



      • #73
        Originally posted by archsway View Post
        Databases never need to be started after a webserver, so all you have to do is make sure all databases are started before web servers.
This is a logical trap and a half. So a database you are not currently using in production, that someone spun up for testing for some reason, now results in your webserver not starting???

        Originally posted by archsway View Post
Even if your webserver isn't using the database you set up (what else would use it, though?), all that would mean is that it takes a couple of seconds extra for the webserver to start up, which is unlikely to be a problem as the network connection is more likely to be the bottleneck.
Funny thing: that couple of extra seconds is not a couple of seconds. It is basically forever. Your webserver is not starting because some database it did not depend on is now broken; with what you suggested, the web server never starts.

Databases can be used for quite a few things other than a webserver. You might have remote syslogs coming in and being stored in a database. Say that database got overfull and died, and the web server hosted its management interface; now the web server is not starting. Brilliant idea.


You might have a PostgreSQL database on the server for your QGIS desktop users to use, while your webserver sites are PHP and MySQL. If PostgreSQL does not start in this case, the webserver should still have started, as it is not using that database.

With a QGIS backend database being PostgreSQL with a PostGIS database, you can be talking about a database that needs 400TB of storage and can generate a hell of a lot of memory and CPU pressure, so you really do want to shut it down when no one is using it. This is one of the fun cases where you want a start-on-demand database.
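For illustration, a rough sketch of that start-on-demand pattern using a systemd socket unit. The unit names are hypothetical, and it assumes a daemon that supports systemd socket activation (stock PostgreSQL does not take a socket from systemd out of the box), so treat this as the pattern rather than a drop-in config:

Code:
# big-db.socket -- hypothetical; first connection starts big-db.service
[Socket]
ListenStream=5432

[Install]
WantedBy=sockets.target

The "shut it down when no one is using it" half still depends on the service exiting when idle; RuntimeMaxSec= in the service is one blunt way to cap it.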

Another option is GnuCash storing its data in PostgreSQL or MySQL. Basically there is a long list of applications that are not webservers that a business may be using with a database backend, and that backend happens to be on a server running a webserver as well. That webserver might be the system management front end with no direct connection to whether those databases are up.

You might be spinning up two instances of PostgreSQL, one for the QGIS/GnuCash/other application stuff and one for the webservers, both on the same box. Of course the webserver only needs to fail to start when the instance for the webservers does not start.

Also, the idea that databases never need to be started after webservers is wrong. Consider that your accountancy staff are only in 2 days a week.

A fun feature of systemd timers is that you could have enabled databases that only run, say, 2 days a week, when the staff who need them are there. So in a case like this should the webserver also only run 2 days a week, when you have other staff there 7 days a week needing to access it???
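A sketch of that timer trick, with made-up unit names and days: the database service gets no [Install] section of its own, and the timer starts it only on the days the staff are in.

Code:
# accounts-db.timer -- hypothetical; starts accounts-db.service Mon and Tue
[Timer]
OnCalendar=Mon,Tue *-*-* 08:00:00
Unit=accounts-db.service

[Install]
WantedBy=timers.target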

Basically this is not as simple a problem as you think it is. As you get more experience you will learn that not everything is about the web server when it comes to what the databases are up to. Not everything in a business needs to be running at the same time.

Heck, you might assign a server multiple IP addresses and run multiple httpd instances, each depending on a different database instance, each only needing to run on particular days of the week. If one has a broken database instance for some reason, there is no reason for the other to fail as well. Your stupid configuration idea would cause exactly that.

The databases a web-server instance depends on need to be up before that instance, but the databases the instance does not depend on absolutely do not need to be up before it. That is the problem: systemd can only know that if you set it in the service unit files, like the drop-in sketched below.
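And that is only a couple of lines per instance. A sketch of such a drop-in, with hypothetical unit names, tying one httpd instance to exactly the database it uses and nothing else:

Code:
# /etc/systemd/system/httpd@siteA.service.d/db.conf -- hypothetical names
[Unit]
# order after, and pull in, only the database this instance actually uses
After=mariadb.service
Wants=mariadb.service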

Basically I can write up tons more examples where you have a webserver and databases on the same box, and the data many of those databases host has nothing to do with the websites the webserver is providing.



        • #74
          Originally posted by arokh View Post

This is ridiculous; here we have one more guy who just doesn't know how to read the manual and blames it on systemd. You realize that keeping services and their dependencies running is its ace, right? Less convenient when doing something "complicated"? Incredible.
Something I could have done with ONE script needs several service files, with dependencies between them, to be done with systemd. Yes, this is more complicated than just one script/service.

          Originally posted by arokh View Post
          Your opinion on something you are unable to even operate isn't that interesting, it's like listening to someone who can't drive a car explain about car engines.
First, I KNOW how to operate systemd. Second, if you want an opinion on car engines, you are better off talking with an engineer who never learned to drive than with a random car driver who does not even know what "thermodynamics" means. So your example is bad.

The other thing is that the opinion I am giving is about systemd as a whole, not as an init system. The fact is that systemd is not (only) an init system. An init system has nothing to do with user sessions, devices, or bootloading.

          @starshipeleven
A set of command line tools vs something that is nearly a whole OS; are you serious?



          • #75
            Originally posted by ALRBP View Post
Something I could have done with ONE script needs several service files, with dependencies between them, to be done with systemd. Yes, this is more complicated than just one script/service.

Just because it's only one file doesn't mean it's less complicated; we don't count complexity by file count.

systemd = a few small files describing each service and their relation to each other; systemd handles making sure they start up in the correct order.
shell script = potentially hundreds of lines of shell code calling different programs to try to achieve the same thing.

            I know which one I prefer.
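For scale, the kind of "few small files" being compared; the names here are made up, and the relation lines are what let systemd work out the ordering:

Code:
# myapp.service -- hypothetical
[Unit]
Description=My app
After=mydb.service
Wants=mydb.service

[Service]
ExecStart=/usr/bin/myapp

[Install]
WantedBy=multi-user.target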



            • #76
              Originally posted by Paul Frederick View Post

This isn't a court of law, numb nuts. I don't use that DE anymore because it didn't work. SystemD made me fall back to how I ran Linux 20 years ago. How's that for progress?
That says more about you than anything else, i.e. a workman blaming his tools.



              • #77
                Originally posted by Britoid View Post


shell script = potentially hundreds of lines of shell code calling different programs to try to achieve the same thing.

                I know which one I prefer.
Plus, the shell script is not guaranteed to run on any other distribution without modification, which I think is a major benefit of config files.



                • #78
                  Originally posted by ALRBP View Post
Something I could have done with ONE script needs several service files, with dependencies between them, to be done with systemd. Yes, this is more complicated than just one script/service.
I have done it to prove a point: a full desktop Linux system with one systemd service file. Debugging it when something goes wrong is a nightmare from hell, but it works.

                  Originally posted by ALRBP View Post
First, I KNOW how to operate systemd.
Not exactly: you have been taught the recommended way to use systemd. You don't know how to push the system to the absolute limit, so you don't really know how to fully operate systemd at all. As for comparing operations: what you can do in 1 script you can in fact do in 1 service file.

https://www.freedesktop.org/software...d.service.html The command lines section there kind of suggests it.
Service files allow as many Exec*= lines as you need, with * being a wildcard for Start, Stop, StartPre and so on. I think it was 2000 ExecStart= lines in that demo service file.

Basically, what you did as one script you could do as a single service file under systemd, along the lines of the sketch below. It's just not highly recommended, because it makes debugging issues harder.
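A minimal sketch of that single-file approach, with hypothetical script paths; note that systemd only permits multiple ExecStart= lines when Type=oneshot is set:

Code:
# everything.service -- hypothetical one-file setup; it works, but a failed
# step is much harder to pin down than with separate units
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/step1.sh
ExecStart=/usr/local/bin/step2.sh
ExecStart=/usr/local/bin/step3.sh

[Install]
WantedBy=multi-user.target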

Yes, this is another of those false ideas about systemd. Systemd suggests you do something and tries to convince you to go in that direction because long-term it will be better for you. If you are stubborn, you don't have to.

Yes, I did that full Linux desktop start-up using a single systemd service file, and I do mean a single one, as a demo to a boss whose argument (the same as yours) I was sick of hearing. He was very much "oh, so that in fact works". Not recommended, but the story that you need to use multiple service files is crap. Using multiple service files is recommended because it makes logging cleaner and diagnosing individual failed parts simpler.

What is going to be the next anti-systemd garbage argument?




                  • #79
So basically what we're dealing with here is an army of newbies complaining because their equivalent of autoexec.bat is gone and they are incapable of reading the manual. It seems to me you would be better off with a point-and-click type of OS, but I'll give you a helping hand:

Code:
[Unit]
Description=Noobexec

[Service]
# oneshot: run the script once, then consider the service done
Type=oneshot
ExecStart=/home/newbie/autoexec.bat

[Install]
WantedBy=multi-user.target
There, now you can put your script in there, shut up, and get off the internet instead of talking about stuff you have no clue about.



                    • #80
                      Originally posted by rtfazeberdee View Post

                      that says more about you than anything else i.e. workman blaming the tools .
How does it say anything about me? I don't work on either project; I am just an end user. The tools are simply supposed to work, and when they don't, there's nothing I can do to fix them either.

