Reworked x86_64 Parallel Boot Support Posted For The Linux Kernel

    Phoronix: Reworked x86_64 Parallel Boot Support Posted For The Linux Kernel

    Patches that speed up Linux kernel boot times by allowing the parallel bring-up of CPU cores have been in the works for a while. AMD boot issues encountered along the way have since been worked around, and the patches have gone through multiple revisions. That work continues to be improved upon, and yesterday a reworked patch series was posted...
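    To illustrate why parallel bring-up helps, here is a toy model (not kernel code; the real patches change the x86 SMP boot path): each core's bring-up involves a fixed wait, so doing the waits concurrently rather than one after another shrinks total boot time. The delay value and function names are made up for the sketch.

```python
# Toy model of serial vs parallel CPU bring-up (illustrative only).
import time
from concurrent.futures import ThreadPoolExecutor

BRINGUP_DELAY = 0.05  # stand-in for the per-CPU handshake latency (hypothetical)

def bring_up(cpu):
    time.sleep(BRINGUP_DELAY)  # stand-in for the INIT/SIPI-style wait
    return cpu

def serial_boot(cpus):
    # One core at a time: total time scales with the CPU count.
    return [bring_up(c) for c in range(cpus)]

def parallel_boot(cpus):
    # All cores at once: total time is roughly one bring-up delay.
    with ThreadPoolExecutor(max_workers=cpus) as pool:
        return list(pool.map(bring_up, range(cpus)))
```

    With four simulated CPUs, the serial path takes about four delays while the parallel path takes about one, which is the effect the patch series is after on real hardware.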


  • #2
    Now if only we could get a sped up shutdown time.



    • #3
      Originally posted by Lanz View Post
      Now if only we could get a sped up shutdown time.
      In my experience, the problem with shutdown time is systemd blocking while trying to shut down NetworkManager (or a couple of other network daemons) that regularly misbehave. A 90-second timeout for each is ridiculous.
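      For anyone hit by this, a common mitigation (it works around, rather than fixes, the misbehaving daemon) is a per-unit drop-in that lowers the stop timeout; the 10s value here is just an example:

```ini
# /etc/systemd/system/NetworkManager.service.d/override.conf
[Service]
TimeoutStopSec=10s
```

      Run `systemctl daemon-reload` after creating the file. Alternatively, `DefaultTimeoutStopSec=` in the `[Manager]` section of /etc/systemd/system.conf lowers the default for every unit at once.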



      • #4
        Originally posted by stormcrow View Post

        In my experience, the problem with shutdown time is systemd blocking while trying to shut down NetworkManager (or a couple of other network daemons) that regularly misbehave. A 90-second timeout for each is ridiculous.
        Isn't their goal to gradually decrease this timeout? It used to be longer. Next, NM would need to terminate in 60 seconds, then 45, then 30... This is best for both sides: future systemd versions will be a bit faster, and the NM authors don't need to learn how to properly implement their application just yet.



        • #5
          Originally posted by caligula View Post

          Isn't their goal to gradually decrease this timeout? It used to be longer. Next, NM would need to terminate in 60 seconds, then 45, then 30... This is best for both sides: future systemd versions will be a bit faster, and the NM authors don't need to learn how to properly implement their application just yet.
          The systemd timeout is not the issue at all; it is just a consequence of things not behaving as they should, which is what should be fixed in the first place. Besides, a hardcoded timeout value is not always such a good idea either.

          For example:
          If a process is eating 100% CPU time and its memory usage is increasing, then any timeout may be small.
          If a process is eating 100% CPU time and its memory usage is DEcreasing, then any timeout should be high.
          If a process is NOT using 100% CPU time (or even >0%), then hold off on nuking it until other processes have been killed or timed out.

          Of course the silly examples above add lots of complexity and need to be a lot more refined, but hardcoded timeouts are a nasty idea in my opinion.
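          The three rules above can be sketched as a small policy function. Everything here is made up for illustration (the function name, the base timeout, and the thresholds); it is the commenter's heuristic, not anything systemd actually implements:

```python
# Sketch of an adaptive shutdown-timeout heuristic (illustrative only).

def shutdown_patience(cpu_pct, mem_delta, base_timeout=90):
    """Return a timeout in seconds, or None meaning 'defer until last'.

    cpu_pct   -- recent CPU usage of the process, in percent
    mem_delta -- recent change in memory footprint (positive = growing)
    """
    if cpu_pct <= 0:
        # Idle process: hold off on nuking it until busier ones are handled.
        return None
    if cpu_pct >= 100 and mem_delta > 0:
        # Busy and growing: likely stuck or leaking, cut it off quickly.
        return base_timeout // 9
    if cpu_pct >= 100 and mem_delta < 0:
        # Busy and shrinking: probably flushing state, give it extra time.
        return base_timeout * 2
    # Anything else gets the ordinary timeout.
    return base_timeout
```

          A real implementation would need to sample these values over time (e.g. from /proc on Linux) and handle far more cases, which is exactly the added complexity the comment concedes.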

          http://www.dirtcellar.net



          • #6
            The current timeout is already aggressive, especially on spinning rust machines or servers. It can take dozens of seconds to properly shut down a database server, for instance.



            • #7
              Do competent sysadmins really shut down or reboot their machines without properly shutting down the main application first?



              • #8
                Originally posted by smotad View Post
                Do competent sysadmins really shutdown or reboot their machines without properly shutting down the main application in the first place?
                Yes. Sometimes, even automatically.



                • #9
                  Then maybe I'm too paranoid about (critical/customer) data consistency.



                  • #10
                    Originally posted by smotad View Post
                    Then maybe I'm too paranoid about (critical/customer) data consistency.
                    I would guess that you've just not been responsible for enough servers at one time.
                    If you need to shut down an entire rack or row of servers, you're not logging into every single one of them and manually doing anything.

