Linux 5.2 To Enable GCC 9's Live-Patching Option, Affecting Performance In Select Cases


  • #21
    Originally posted by MaxToTheMax
    But have you considered the possibility that what you consider "good practice" is just a workaround for technological limitations that could be solved with technology rather than accepted without question?
    I'm not talking about technology. I just don't see why you need to open hundreds of programs and tabs to be productive. The limitation here is the man, not the machine.

    What kind of job requires you to constantly edit and read files all over the place at the same time? Is this really a requirement, or is it you who keeps jumping all over the place instead of focusing on a single task? I say this because the people I see with this approach (also in real life, with books/manuals and half-finished projects all over the place) aren't really efficient.

    Normal jobs require some multi-window work, but that's less than 10 minutes of setup, not half an hour just for browser tabs.

    Then again, I can't say whether this is your case or not, as I'm not observing you in real life, but from what you said, it surely looks like it.

    If your server really is that important, you should probably get two and reboot one at a time.
    This is not a thing. Many servers for small (but still important) stuff are not run as a cluster because it's a cost no one wants to pay.

    You can have backups and all, but any proposal to buy two servers when a single server can do the job will simply NOT fly, at all.



    • #22
      Originally posted by starshipeleven
      I'm not talking about technology. I just don't see why you need to open hundreds of programs and tabs to be productive. The limitation here is the man, not the machine.
      Well, I used to be reasonably productive on a 1366x768 laptop. Technically you don't NEED more than that, but I wouldn't want to go back to working that way again, and you probably wouldn't like it either.

      What kind of job requires you to constantly edit and read files all over the place at the same time? Is this really a requirement, or is it you who keeps jumping all over the place instead of focusing on a single task? I say this because the people I see with this approach (also in real life, with books/manuals and half-finished projects all over the place) aren't really efficient.
      Sweeping refactors on a large codebase.

      Normal jobs require some multi-window work, but that's less than 10 minutes of setup, not half an hour just for browser tabs.
      Even if it WAS just 10 minutes, why spend that time when you can save it? Seems completely needless.

      This is not a thing. Many servers for small (but still important) stuff are not run as a cluster because it's a cost no one wants to pay.

      You can have backups and all, but any proposal to buy two servers when a single server can do the job will simply NOT fly, at all.
      If it's not important enough to make plans for availability during hardware failure, it's probably not the kind of service where you're planning downtime months in advance.



      • #23
        Originally posted by MaxToTheMax

        If your server really is that important, you should probably get two and reboot one at a time. When your PSU blows a capacitor, you'll need the backup anyway. Not that it's wrong to use live patching for this case, far from it. It's just less of a necessity, unless you're skimping on redundancy.
        The mere fact that you have redundancy doesn't change the fact that rebooting major servers is a pain in the backside.
        There's a vast difference (procedure-wise) between an unplanned outage (in which you simply switch hosts) and a planned outage. In the first case, some interruption of service is expected. In the second case, you'll be hanged.

        - Gilboa
        oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
        oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
        oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
        Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.



        • #24
          Originally posted by MaxToTheMax
          Even if it WAS just 10 minutes, why spend that time when you can save it? Seems completely needless.
          It's roughly the time you need to get your mind into the project again. Human limitations.

          If we are talking about half an hour or more to set up, then yes, it becomes a machine issue again.

          If it's not important enough to make plans for availability during hardware failure, it's probably not the kind of service where you're planning downtime months in advance.
          It does not need to be logical, you know.
          They nearly always under-spec stuff and then ask you for 99.999% uptime with planned reboots. The point is that if the hardware fails because they under-specced it, they can't blame me; if I reboot the server outside the schedule, then I have breached the contract and they come and get me.
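          (For scale: 99.999% availability leaves about 0.00001 × 525,600 ≈ 5.3 minutes of downtime per year, so a single reboot outside the maintenance window can blow the entire annual budget.)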



          • #25
            Originally posted by starshipeleven
            It does not need to be logical, you know.
            They nearly always under-spec stuff and then ask you for 99.999% uptime with planned reboots. The point is that if the hardware fails because they under-specced it, they can't blame me; if I reboot the server outside the schedule, then I have breached the contract and they come and get me.
            Fair enough. I'm glad that you now have an option to do your job the right way without getting shit for it.
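
            For context on what that option looks like in practice: a kernel live patch is just a module that redirects old functions to replacements through the klp API. Below is a minimal sketch adapted from the kernel's own samples/livepatch/livepatch-sample.c, assuming the consolidated Linux 5.1+ API where klp_enable_patch() registers and enables the patch in one step. GCC 9's -flive-patching=inline-clone option from the article exists precisely to restrain the inter-procedural optimizations that would make this kind of function replacement unsafe.

                #include <linux/module.h>
                #include <linux/kernel.h>
                #include <linux/livepatch.h>
                #include <linux/seq_file.h>

                /* Replacement for the kernel's cmdline_proc_show() (/proc/cmdline). */
                static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
                {
                        seq_printf(m, "%s\n", "this has been live patched");
                        return 0;
                }

                /* Map each function to be replaced to its replacement. */
                static struct klp_func funcs[] = {
                        {
                                .old_name = "cmdline_proc_show",
                                .new_func = livepatch_cmdline_proc_show,
                        }, { }
                };

                /* A NULL .name means the functions live in vmlinux itself. */
                static struct klp_object objs[] = {
                        {
                                .funcs = funcs,
                        }, { }
                };

                static struct klp_patch patch = {
                        .mod = THIS_MODULE,
                        .objs = objs,
                };

                static int livepatch_init(void)
                {
                        /* Register the patch and start transitioning tasks to the new code. */
                        return klp_enable_patch(&patch);
                }

                static void livepatch_exit(void)
                {
                }

                module_init(livepatch_init);
                module_exit(livepatch_exit);
                MODULE_LICENSE("GPL");
                MODULE_INFO(livepatch, "Y");

            Once the module is loaded with insmod, the replacement takes effect without a reboot; the patch can later be reverted by writing 0 to /sys/kernel/livepatch/<patch>/enabled and removing the module.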

