The Linux Kernel Firms Up The Process For Dealing With Nasty Hardware Vulnerabilities


  • #1
    Phoronix: The Linux Kernel Firms Up The Process For Dealing With Nasty Hardware Vulnerabilities

    With all of the CPU security bugs over the past two years and heightened concerns about hardware vulnerabilities in general, the upstream Linux kernel has been working to create a formal process for dealing with the disclosure process and addressing said issues within the kernel code...

    http://www.phoronix.com/scan.php?pag...losure-Process

  • #2
    This feels like a bad omen...

  • #3
    They got pretty pissed over how Meltdown was handled, so I can see the KVM changes reminding them that establishing communication channels would be beneficial. Enforce vendors who don't comply on pain of being left out of tree.
    "Why doesn't my stuff work?" "Hardware unsafe to use, check back in a year."

  • #4
    I'm glad this sort of thing finally happened. With critical software such as Linux, having a reasonable way of disclosing vulnerabilities is a must.

  • #5
    Originally posted by Snaipersky View Post
    Enforce vendors who don't comply on pain of being left out of tree.
    "Why doesn't my stuff work?" "hardware unsafe to use, check back in a year"
    That's a beyond-stupid way to deal with these issues. This is open source; anyone downstream can re-enable whatever was "removed" and take a decent shot at keeping it compiling for a while.
    It just forces distros to deal with the "Why doesn't my stuff work?" issue in their own ways, causing fragmentation.
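    For what it's worth, the kind of downstream override being described here could look something like the following distro-kernel workflow. This is a hypothetical sketch: the commit hash and the Kconfig symbol are made up for illustration, not taken from any real removal.

    ```shell
    # In a downstream kernel tree: undo the upstream commit that removed
    # the hardware support (abc1234 is a placeholder hash).
    git revert --no-edit abc1234

    # Re-enable the corresponding (hypothetical) config option...
    echo 'CONFIG_EXAMPLE_DRIVER=m' >> .config

    # ...and let Kconfig resolve any dependencies before building.
    make olddefconfig
    ```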

  • #6
    Originally posted by starshipeleven View Post
    That's a beyond-stupid way to deal with these issues. This is open source; anyone downstream can re-enable whatever was "removed" and take a decent shot at keeping it compiling for a while.
    It just forces distros to deal with the "Why doesn't my stuff work?" issue in their own ways, causing fragmentation.
    And the frustration will serve as a deterrent to use. If vendors don't want that treatment, then they should follow the appropriate guidelines.
    If Intel finds a Meltdown 2, then they need to go through the proper channels for managing who the vulnerability gets disclosed to; otherwise they get the big black mark in the cloud space of being unstable and unsafe, which would cost them millions to billions.

    Don't want that treatment? Don't lie to the maintainers and don't try to strongarm patch management on mainline.

  • #7
    Originally posted by Snaipersky View Post
    And the frustration will serve as a deterrent to use.
    I said fragmentation, not frustration. It's really trivial to override whatever upstream disables.

    Originally posted by Snaipersky View Post
    If Intel finds a Meltdown 2, then they need to go through the proper channels for managing who the vulnerability gets disclosed to, otherwise they get the big black mark in the cloud space of being unstable and unsafe, which would cost them millions to billions.
    They won't, for the same reason they didn't disclose it privately the first time around.

    The Linux kernel is open source and its development is public. Any PR sending in mitigations for an undisclosed vulnerability will attract attention and violate the non-disclosure agreements covering that vulnerability. That will cost them a lot too, because now they have put Microsoft, and every other vendor using their hardware that was informed secretly in advance, in deep shit.

  • #8
    Originally posted by starshipeleven View Post
    ...Linux kernel is opensource, its development is public...
    The point of the article is that there is now "a private list of security officers" working on a "non-public git repository", precisely to preempt that sort of straw-man argument.
    Last edited by elatllat; 09-30-2019, 01:19 AM.

  • #9
    Originally posted by elatllat View Post
    The point of the article is that there is now "a private list of security officers" working on a "non-public git repository", precisely to preempt that sort of straw-man argument.
    1. It's not a straw-man argument.
    2. Even with a "private git repository", you can't have the patch land in mainline and get serious feedback and testing until the embargo is over, for the reasons above.
    The "private list of security officers" (namely Torvalds, Greg, and another guy) can't really know enough about the CPU architecture and the exploits to mitigate the issue properly themselves (as in "without significant performance loss"), so they still have to rely on contributions from Intel or whoever the manufacturer is.
    This will only slightly increase the time the CPU mitigation patches get for review (3-6 months, depending on the NDA), but it won't guarantee a damn thing about the quality of the contribution (Intel can keep sending BS mitigations instead of fixing things in hardware), and it's still limited to fixing the issue in mainline when the NDA expires.
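    For anyone wanting to check what mitigation quality they actually got: the kernel exposes per-vulnerability status strings under /sys/devices/system/cpu/vulnerabilities/ (files like meltdown, spectre_v2, and so on). A small sketch of sorting those strings into a verdict; the classify_mitigation helper is hypothetical, but the "Not affected" / "Mitigation:" / "Vulnerable" prefixes follow the format the kernel documents.

    ```shell
    # Map a kernel vulnerability status string to a one-word verdict.
    classify_mitigation() {
        case "$1" in
            "Not affected"*) echo "ok" ;;
            "Mitigation:"*)  echo "mitigated" ;;
            "Vulnerable"*)   echo "vulnerable" ;;
            *)               echo "unknown" ;;
        esac
    }

    # On a live system the inputs would come from files like
    # /sys/devices/system/cpu/vulnerabilities/meltdown
    classify_mitigation "Mitigation: PTI"   # prints "mitigated"
    classify_mitigation "Not affected"      # prints "ok"
    ```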
