Oracle's Ksplice Live Kernel Patching Picks Up Known Exploit Detection

  • Oracle's Ksplice Live Kernel Patching Picks Up Known Exploit Detection

    Phoronix: Oracle's Ksplice Live Kernel Patching Picks Up Known Exploit Detection

    One of the areas of Oracle Linux and its "Unbreakable Enterprise Kernel" that the company continues investing in and differentiating it from upstream RHEL and alternatives is around Ksplice as their means of live kernel patching while Red Hat continues with Kpatch and SUSE with kGraft...


  • #2
    Dear Oracle,

    Something, something, ZFS, something, something, license, something, something, darkside.

    Signed,
    ZoL Users

    On a serious note, are any of the kernel patching tools even worth using on a desktop? Since you would have to restart all services and the GUI, and possibly write custom kernel patches, it just seems like a lot of effort to skip the BIOS/UEFI step. I know that for me, if it didn't "just work" and required manual intervention, it would be faster to do it the regular way -- update via the package manager and reboot -- since it's a desktop and the reboot downtime is irrelevant.

    Also seems like it would just make more sense to have two or three servers, update one of them and switch users over to it if it works, then update the other one...manage them in a tick-tock method...

    I'm not saying they aren't useful or anything like that, just that none of them seem like the first tool in the box one would want to reach for if there are alternatives or if a minute of downtime doesn't matter.
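
    That tick-tock rotation can be sketched in a few lines. This is purely illustrative -- the server records and the health check are hypothetical stand-ins, not a real orchestration API; an actual setup would drain a load balancer and probe the service:

    ```python
    # Illustrative sketch of the tick-tock update scheme described above.
    # Update servers one at a time so at least one stays in rotation.

    def rolling_update(servers, new_version, healthy=lambda s: True):
        """Drain, update, and re-admit each server in turn."""
        for server in servers:
            server["in_rotation"] = False       # drain: stop routing users here
            server["version"] = new_version     # update via package manager + reboot
            if healthy(server):                 # "switch users over to it if it works"
                server["in_rotation"] = True
        return servers

    fleet = [
        {"name": "web1", "version": "5.0", "in_rotation": True},
        {"name": "web2", "version": "5.0", "in_rotation": True},
    ]
    rolling_update(fleet, "5.1")
    ```

    In practice the "if it works" step would be a real health probe before the box re-enters rotation, so a bad update only ever takes one server out.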



    • #3
      Bells and whistles.



      • #4
        Originally posted by skeevy420 View Post
        Also seems like it would just make more sense to have two or three servers, update one of them and switch users over to it if it works, then update the other one...manage them in a tick-tock method...
        Why are people assuming you can freely and easily allocate dozens of servers for everything? In many cases you have only one. The customer won't pay for more than one.



        • #5
          Originally posted by starshipeleven View Post
          Why are people assuming you can freely and easily allocate dozens of servers for everything? In many cases you have only one. The customer won't pay for more than one.
          If your customer isn't willing to pay for redundancy, they obviously don't care about keeping their systems up, and they deserve the downtime they get on their critical servers. You can't have one without the other, even if you put in the effort needed to live-patch your kernel.



          • #6
            Originally posted by DoMiNeLa10 View Post
            If your customer isn't willing to pay for redundancy, they obviously don't care about keeping their systems up
            Wrong. RAID and redundant hot-swappable PSUs are a thing for servers deployed alone. It's pretty hard to take them down with a hardware failure.



            • #7
              Originally posted by starshipeleven View Post
              Wrong. RAID and redundant hot-swappable PSUs are a thing for servers deployed alone. It's pretty hard to take them down with a hardware failure.
              You cannot hot-swap everything.



              • #8
                Originally posted by sdack View Post
                You cannot hot-swap everything.
                It's a blade server, yo. That thing has no redundancy per se, as you are supposed to run more of them as a cluster or something.
                Last edited by starshipeleven; 21 April 2019, 02:33 PM.



                • #9
                  Originally posted by starshipeleven View Post
                  Wrong. RAID and redundant hot-swappable PSUs are a thing for servers deployed alone. It's pretty hard to take them down with a hardware failure.
                  You can only get so far with redundancy inside the same machine. A software failure will take it down, and redundant hardware can't prevent that. If you aren't running a cluster of redundant machines, preferably multiple clusters in multiple locations, you're asking for downtime.



                  • #10
                    Originally posted by starshipeleven View Post
                    It's a blade server, yo. ...
                    Look closer... It's a burned blade server.

                    Just because a server has hot-swappable components doesn't mean it's safe. Those components can still kill the entire server.

                    Worse, the concept of hot-swapping can lead people to think it's ok to use cheaper components. What could go wrong?! ...
                    Last edited by sdack; 21 April 2019, 03:22 PM.
