New Linux CPU Hot-Plugging Works Out "Nightmare"


  • New Linux CPU Hot-Plugging Works Out "Nightmare"

    Phoronix: New Linux CPU Hot-Plugging Works Out "Nightmare"

    The current Linux kernel CPU hot-plugging support has been described as "an increasing nightmare full of races and undocumented behaviour", but fortunately it's in the process of being re-developed...

    http://www.phoronix.com/vr.php?view=MTI4OTQ

  • #2
    Forgive my ignorance, but what exactly is CPU hot-plugging? Is it, as the name implies, installing and removing a CPU while the machine is powered on? If so, I guess this is server-level stuff; I can't see myself needing to (or indeed being able to) swap out the CPU on my desktop.


    • #3
      Originally posted by kaprikawn View Post
      Forgive my ignorance, but what exactly is CPU hot-plugging? Is it, as the name implies, installing and removing a CPU while the machine is powered on?
      Correct.

      If so, I guess this is server-level stuff; I can't see myself needing to (or indeed being able to) swap out the CPU on my desktop.
      Correct.
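
      Beyond physically swapping CPUs, the same kernel infrastructure handles logical hot-plugging: taking a core offline and bringing it back while the system is running. A minimal sketch of that in Python (helper name is just for the example; it assumes the standard sysfs layout, root privileges, and a hot-pluggable core, and note that cpu0 often cannot be offlined on x86):

          # Toggle a logical CPU via the kernel's sysfs hot-plug interface.
          # Assumes /sys/devices/system/cpu/cpuN/online exists for this core
          # and that the script runs as root.
          from pathlib import Path
          import time

          def set_cpu_online(cpu: int, online: bool) -> None:
              knob = Path(f"/sys/devices/system/cpu/cpu{cpu}/online")
              knob.write_text("1" if online else "0")

          set_cpu_online(1, False)   # take cpu1 offline ("unplug" it logically)
          time.sleep(5)              # ... do something, or just wait ...
          set_cpu_online(1, True)    # bring it back online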


      • #4
        Originally posted by kaprikawn View Post
        Forgive my ignorance, but what exactly is CPU hot-plugging? Is it, as the name implies, installing and removing a CPU while the machine is powered on? If so, I guess this is server-level stuff; I can't see myself needing to (or indeed being able to) swap out the CPU on my desktop.
        I guess this potentially also applies to VMs, not just physical machines: does assigning a couple of extra cores to a VM count as CPU hot-plugging?
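
        Roughly, yes: from the guest kernel's point of view a hot-added vCPU goes through the same per-CPU sysfs interface, showing up as "present" and then being brought online. A hedged sketch (standard sysfs paths; the helper name is just for the example, and how the hypervisor exposes the new vCPU is out of scope) that lists CPUs which are present but not yet online, e.g. a freshly hot-added vCPU waiting to be brought up:

            # Run inside the VM: compare "present" vs "online" CPUs.
            from pathlib import Path

            def parse_cpu_list(text: str) -> set[int]:
                """Parse a kernel CPU range list such as '0-3,5' into a set of ints."""
                cpus: set[int] = set()
                for part in text.strip().split(","):
                    if "-" in part:
                        lo, hi = part.split("-")
                        cpus.update(range(int(lo), int(hi) + 1))
                    elif part:
                        cpus.add(int(part))
                return cpus

            base = Path("/sys/devices/system/cpu")
            present = parse_cpu_list((base / "present").read_text())
            online = parse_cpu_list((base / "online").read_text())
            print("present but not yet online:", sorted(present - online))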


        • #5
          I hope I'm wrong, but I think Linux kernel developers are forgetting to maintain code quality. Adding features to the system is important, but quality should be the top priority.


          • #6
            Originally posted by dante View Post
            I hope I'm wrong, but I think Linux kernel developers are forgetting to maintain code quality. Adding features to the system is important, but quality should be the top priority.
            What don't you like about the current patch set?


            • #7
              Originally posted by dante View Post
              I hope I'm wrong, but I think Linux kernel developers are forgetting to maintain code quality. Adding features to the system is important, but quality should be the top priority.
              Uh. In short: you're wrong.


              • #8
                Originally posted by dante View Post
                I hope I'm wrong, but I think Linux kernel developers are forgetting to maintain code quality. Adding features to the system is important, but quality should be the top priority.
                Uh, isn't that exactly what this patch set is trying to do?


                • #9
                  Originally posted by dante View Post
                  I hope I'm wrong, but I think Linux kernel developers are forgetting to maintain code quality. Adding features to the system is important, but quality should be the top priority.
                  So you are saying Thomas Gleixner should be forced to work on other parts of the Linux kernel (which are not part of his domain) to improve code quality? Or that they should reject his code and only accept code that improves quality in other parts of the kernel, and then accept his code once those parts are good enough?


                  • #10
                    This may be useful for the ARM crowd, where turning off cores can be used for power saving, assuming CPU hot-plugging can also be used to hot-plug individual CPU cores.
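
                    As a rough illustration of that power-saving idea (a sketch only; real systems normally leave this to cpuidle or a hotplug governor rather than a script, the helper name is just for the example, and cpu0 typically stays online):

                        # Crude power saving: offline the upper half of the cores via the
                        # standard sysfs knob, then bring them back later. Needs root.
                        from pathlib import Path

                        BASE = Path("/sys/devices/system/cpu")

                        def set_online(cpu: int, online: bool) -> None:
                            (BASE / f"cpu{cpu}" / "online").write_text("1" if online else "0")

                        # "possible" holds a range such as "0-7"
                        last = int((BASE / "possible").read_text().strip().split("-")[-1])
                        upper_half = range(max(1, (last + 1) // 2), last + 1)

                        for cpu in upper_half:   # park half the cores
                            set_online(cpu, False)
                        # ... low-load period ...
                        for cpu in upper_half:   # bring them back
                            set_online(cpu, True)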


                    • #11
                      Originally posted by dante View Post
                      I hope I'm wrong, but I think Linux kernel developers are forgetting to maintain code quality. Adding features to the system is important, but quality should be the top priority.
                      Why? Did they reject a patch set of yours that was intended to improve quality?


                      • #12
                        Originally posted by dante View Post
                        I hope I'm wrong, but I think Linux kernel developers are forgetting to maintain code quality. Adding features to the system is important, but quality should be the top priority.
                        My feeling is that the hot-plugging support we have had so far was "hacked" on incrementally, which is why it is less than stellar. The current rewrite is a consequence of this and will probably make the quality go up, not down.


                        • #13
                          Originally posted by kaprikawn View Post
                          Forgive my ignorance, but what exactly is CPU hot-plugging? Is it, as the name implies, installing and removing a CPU while the machine is powered on? If so, I guess this is server-level stuff; I can't see myself needing to (or indeed being able to) swap out the CPU on my desktop.
                          It does make me wonder: if, as you say, it's aimed at servers (and that does make sense), then why was the current system such a botched, messy, untidy job? I mean, if there's one place where multi-CPU support and hot-plugging might be useful, it's servers, supercomputer clusters, and that sort of thing. And that's the kind of territory where Linux has traditionally been the go-to choice, if I recall correctly. It's a tad ironic, is all I'm saying.


                          • #14
                            This sounds like a very common pattern:

                            - subsystem starts simple and grows incrementally
                            - design abstraction which was probably OK for the initial code is not sufficient to deal with growing complexity
                            - developers get together at conference and agree on how it should be done
                            - one developer writes first pass of code following new model
                            - old code gets tossed over the nearest clump of cactus and badmouthed even by the people who wrote it

                            The difference here seems to be that the author of the new code also authored a particularly colourful description of the old code.
