Systemd 240 Released To End 2018 On A High Note

  • #51
    Originally posted by bnolsen View Post

    Come join us who run voidlinux. My slowest spinning disk systems all boot to login in 13s or less. The nvme one goes all the way to desktop in about 8s. I want an init system, not a windows/osx system clone.
    Do you realize that running systemd adds only 0.5 to 1.5 seconds after the kernel has booted? That's about 5% of the total boot time.

    Comment


    • #52
      Originally posted by hreindl View Post

      no!

      Fedora dist-upgrades are usually a no-brainer compared with LTS distributions, because you have at most one or two invasive changes to deal with.
      Try upgrading CentOS 6 to CentOS 7 without a re-install.

      The machines around me were installed with Fedora 9 and are now running Fedora 28.
      That's 19 dist-upgrades, and the switches to grub2, dracut, UsrMove, httpd 2.4 and systemd were no fun.

      But with long cycles you get all of that fun combined at the same time.

      LTS distributions are fine when your needs don't change.
      Wait 5 years and then try to build some recent software on RHEL 8.
      I agree that long-cycle upgrades are by definition much more disruptive. But by the same token, the upgrade itself is generally much more thoroughly tested, and the LTS release contains stable, well-tested versions of its software components (with infamous exceptions, of course). Six-monthly releases are a step up from rolling distros in terms of reliability, but not by that much, really. Personally I think a one-year release cycle would be a good compromise.

      Comment


      • #53
        Originally posted by hreindl View Post

        besides that, you're talking to a *non systemd user*, and the 0.5 to 1.5 seconds are pure bullshit - or how do you explain to me that i have systemd systems where you lose only 5-10 pings when you reboot them? even if systemd did take 0.5 seconds (which is proven wrong), how would that matter when you have parallel startup of services (the faster your disks, the bigger the benefit) instead of braindead init scripts waiting on unrelated stuff most of the time?
        You can't really compare one system to another; some systemd systems are virtual machines or containers. I run several desktop systems, and the 'userspace' time in systemd-analyze is typically 0.5 to 1.5 seconds; it can take longer if you run more services. My point was: if the total boot time is 13 seconds, a plausible breakdown is 5 s firmware + 0.1 s loader + 4 s kernel + 4 s userspace, and the critical chain under systemd might take only 1.5 seconds of that. The init system's role is quite small compared to the whole boot time.
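        That hypothetical breakdown can be sanity-checked in a couple of lines (the figures below are the made-up ones from this post, not measurements):

```python
# Hypothetical boot-time breakdown (seconds) from the scenario above;
# these are illustrative numbers, not measured values.
firmware, loader, kernel, userspace = 5.0, 0.1, 4.0, 4.0

total = firmware + loader + kernel + userspace
print(f"total boot: {total:.1f}s")  # total boot: 13.1s

# If systemd's critical chain accounts for 1.5 s of the userspace phase:
init_share = 1.5 / total
print(f"init share of total boot: {init_share:.0%}")  # init share of total boot: 11%
```

        Even at the pessimistic 1.5 s end, the init system is only around a tenth of a 13-second boot.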

        Comment


        • #54
          Originally posted by hreindl View Post

          your main problem is that you read systemd-analyze output without reflecting on it

          [[email protected]:~]$ systemd-analyze
          Startup finished in 2.050s (kernel) + 1.011s (initrd) + 12.895s (userspace) = 15.957s

          10.972s clamd-sa.service
          3.063s network-up.service
          2.766s network-wan-dhcp.service
          1.455s mysqld-dbmail.service
          1.198s clamd.service
          1.037s mysqld.service

          no, it did not take 15.9 seconds; the machine was fully operational earlier - the clamd instance for SpamAssassin, unlike the one for clamav-milter, is not mandatory. in fact the system loses 5 pings due to the reboot. 0.5 to 1.5 seconds for systemd itself is pure bullshit, and without systemd-analyze you don't even have numbers, so what do you compare with to begin with?

          userspace is the sum over *all* services, and many of them are fired up in parallel, so you MUST NOT blindly add the numbers from "systemd-analyze blame" and scream "the whole boot took that long" - nor did it go faster with sysvinit, which had no parallelism at all. in other words, when you compare earth with venus and base your conclusions on that, the result is always bullshit
          I'm fully aware of how systemd works. The guy I replied to uses Void Linux, a desktop distro; I seriously wouldn't use (Void's) runit for servers. If his 13 (HDD) and 8 (SSD) seconds refer to the total boot time from pressing the power button to the login prompt, that's not really impressive: UEFI machines can cold-boot to sddm in under 5 seconds (SSD) using systemd. Of course that assumes a small number of services or some careful tuning; half of the time is spent on 'firmware' according to systemd-analyze.

          My point was that on my *desktop* systems systemd can be really fast, even counting the time it takes to launch all the services. I don't run antivirus scanners or servers on my desktops, and my DHCP client is faster. systemd-analyze critical-chain shows that launching graphical.target explains all the delays, and that's also everything needed to reach the login prompt. There's very little parallelism involved on such lightweight systems, and my only argument was that with so few services, I'm not sure the init system plays any significant role in the total boot time.
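          As an aside, the per-phase numbers being argued about here come from the summary line that systemd-analyze prints; a small sketch of pulling them apart (the regex assumes the line format shown in the quoted output above):

```python
import re

def parse_startup(line: str) -> dict:
    """Parse the summary line printed by `systemd-analyze` into
    per-phase durations in seconds (kernel, initrd, userspace, ...)."""
    phases = {}
    for value, name in re.findall(r"([\d.]+)s \(([a-z]+)\)", line):
        phases[name] = float(value)
    return phases

summary = ("Startup finished in 2.050s (kernel) + 1.011s (initrd) "
           "+ 12.895s (userspace) = 15.957s")
phases = parse_startup(summary)
print(phases)  # {'kernel': 2.05, 'initrd': 1.011, 'userspace': 12.895}
```

          Note that 'userspace' here is wall-clock time until the last job finishes, not a sum of per-service times, which is exactly why adding up `systemd-analyze blame` entries is misleading.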

          Comment


          • #55
            Originally posted by hreindl View Post

            why did you then write bullshit like "Do you realize executing systemd adds 0.5 to 1.5 seconds after booting the kernel"?
            That's how long it takes to launch sddm on my systems, as I think I said (after the kernel has booted).

            Comment


            • #56
              Originally posted by starshipeleven View Post

              If they add that, you would have to edit the systemd service files of that application and start it through systemd somehow, as it would be a sandboxing-like thing (just as it is on Android), not a firewall in the proper sense of the word.

              Firewalls that actually inspect traffic and block applications regardless of ports and IPs are called "application firewalls", but they require significant amounts of resources, as they inspect all packets and keep track of what is going on.

              Currently the most similar thing is the "portals" feature of Flatpak, where you can give otherwise-sandboxed Flatpak applications permission to access something when they ask for it (similar to Android), but it's still under development.
              I don't know what to say.
              I found out that there are:
              On Windows:
              GlassWire
              https://www.glasswire.com/
              On Android:
              AFWall+
              https://f-droid.org/en/packages/dev.ukanth.ufirewall/
              I think these fall into the category of "application firewall". Thanks for the name, BTW!

              So, seeing that there are already two applications that can do this, it seems to me it can't be so incredibly complicated to create one of these firewalls.
              Even though AFWall+ is designed for Android, Android is still mostly Linux, and AFWall+ says it's just a front-end to iptables.

              Since AFWall+ is open source, I think someone could just look at the source code to see how it's implemented and port it to Linux.

              I don't know about the significant amount of resources needed to track everything. I've been running it on my Android phone for about 2-3 years, and I've never seen a battery-manager notification saying the app drains too much power.
              So I'm thinking that if it's efficient enough for my mobile phone, it must be good enough for a laptop or desktop too.
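              For what it's worth, the iptables mechanism AFWall+ builds on is the `owner` match, which filters packets by the UID of the process that created them (on Android, each app has its own UID). A rough sketch of generating such a per-UID rule; the UID is illustrative and nothing is executed here:

```python
def owner_block_rule(uid: int, chain: str = "OUTPUT") -> list[str]:
    """Build an iptables command (as argv) that drops outbound packets
    created by processes running as `uid`, via the `owner` match module.
    This mirrors the per-app-UID approach AFWall+ uses on Android; the
    owner match only works in locally generated chains such as OUTPUT."""
    return ["iptables", "-A", chain,
            "-m", "owner", "--uid-owner", str(uid),
            "-j", "DROP"]

# Example: block all outbound traffic for UID 1001 (illustrative UID).
print(" ".join(owner_block_rule(1001)))
# iptables -A OUTPUT -m owner --uid-owner 1001 -j DROP
```

              On the systemd side, service files can get a comparable per-service restriction via the `IPAddressDeny=`/`IPAddressAllow=` directives (available since systemd 235), which is closer to the "edit the service file" approach mentioned in the quote.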

              Comment


              • #57
                Originally posted by hreindl View Post

                a frequent misunderstanding of "stable" in this context is that it does not mean "runs stable and has no bugs in your use cases"; if in doubt, it means the behavior doesn't change even when it's plainly wrong, up to https://bugs.debian.org/cgi-bin/bugr...bug=819703#158 (Jamie Zawinski is the upstream developer of xscreensaver) - so you can be sure you won't get rid of nasty bugs for a long time, until they turn out to be security bugs and someone even realizes that they are security bugs at all

                a few years ago a glibc vulnerability became famous and was fixed left and right across all distributions; the point is that originally it was just some bugfix and nobody noticed it was a security issue, so nobody flagged it for distributions doing cherry-picking for backports - Fedora was not affected, because the Fedora version with that glibc release was EOL in the meantime and the new version contained the bugfix

                honestly, i don't want to know how many similar bugfixes never got backported because nobody was aware they should have been. the other side of backporting - touching code the package maintainer doesn't understand well enough - was the famous OpenSSL bug in Debian. the whole concept of "we don't ever do minor updates but cherry-pick commits from upstream because we know better than upstream" is flawed and likely introduces new bugs that never existed upstream at all, by overlooking context: another commit from a few days earlier may call a sanitize function in a different place, and the backported snippet as written assumes it gets called because it's simply part of the software - no upstream release exists that contains just half of the changes

                frankly, i reported way too many RHEL bugs as a CentOS user on the Red Hat bugtracker, and it took months to get them fixed

                my shutdown service on our NFS server contains "/usr/bin/systemctl disable avahi-daemon.service avahi-daemon.socket rpcbind.socket", because every fucking CentOS update over two years enables "rpcbind.socket" unasked. the concept of that machine is "do a basic boot from sd-card, fire up a login script asking for the LUKS password to unlock the storage, mount it, and after that fire up any storage-relevant service". the bug was reported for Fedora and still exists in CentOS 7.6 - sorry, but you don't win me over with "stable"

                PHP is a good example where backporting doesn't work properly, with an attitude of "If an attacker is able to crash PHP within this context, the application is vulnerable to SQL injection, and this is the fault of the application, not of PHP. If the application uses prepared statements with bound variables only then it is not vulnerable to this kind of attack" - and nobody cares that "there is still a difference between being vulnerable to SQL injection and being able to crash the service", so the issue is not flagged as a security fix and never makes it into LTS distributions

                and finally, my main problem with that "we backport to death and don't raise version numbers" bullshit is that it makes it hard to impossible to find out whether some nasty issue was properly fixed, even when you follow upstream closely

                BTW: https://bugs.debian.org/cgi-bin/bugr...bug=819703#158 is amazing to read - so much wasted time because of an ignorant "we released Debian with exactly that version, we know better than upstream, and we don't do point-updates" (which would actually save time for everybody involved)

                Yes, it's true that PHP backporting doesn't work properly with an attitude of "if an attacker is able to crash PHP within this context...".
                Thank you for the good answer.
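                The shutdown-time rpcbind workaround quoted above could be expressed as a one-shot unit along these lines (the file name and unit layout are illustrative, not the poster's actual setup):

```ini
# /etc/systemd/system/re-disable-unwanted.service (illustrative name)
[Unit]
Description=Re-disable units that distribution updates keep enabling
DefaultDependencies=no
Before=shutdown.target

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl disable avahi-daemon.service avahi-daemon.socket rpcbind.socket

[Install]
WantedBy=shutdown.target
```

                Running the disable at shutdown, rather than boot, ensures the next boot starts clean even when an update re-enabled the sockets mid-uptime.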

                Comment



                  • #59
                    Originally posted by 144Hz View Post
                    That’s an impressive amount of features and developers.
                    Very much impressed by the latest features and development. I just hope it is a journey and not a destination; looking forward to more of it.

                    Comment
