NVIDIA 295.40 Closes High-Risk Security Flaw

  • #11
    I thought that my use of the term "fire-and-forget" to describe cron-driven server administration made it clear that this is a practice I look down upon; I certainly do not recommend such behaviour. It must be noted, however, that Debian is the only distro where this does not immediately lead to disaster, and it has therefore, sadly, gained some popularity among Debian admins.
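
    To be concrete about what I mean by "fire-and-forget" (a made-up example, not anyone's actual setup): a job run from cron that applies package updates unattended, with nobody reviewing what actually changed. A minimal Python sketch, assuming a Debian-style system with apt-get on the PATH:

        #!/usr/bin/env python3
        # Hypothetical "fire-and-forget" update job, e.g. run daily from cron.
        # Nothing here reviews or audits what actually got upgraded.
        import os
        import subprocess

        def unattended_upgrade():
            # Keep the current environment, but force apt into non-interactive mode.
            env = dict(os.environ, DEBIAN_FRONTEND="noninteractive")
            subprocess.run(["apt-get", "update"], check=True, env=env)
            subprocess.run(["apt-get", "-y", "upgrade"], check=True, env=env)

        if __name__ == "__main__":
            unattended_upgrade()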

    Finding out who checked in vulnerable code is not about placing blame; it makes it possible to check that person's other code contributions for similar problems.
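
    For example, in a git-based project such an audit could start with something as simple as listing that person's commits and the files they touched (a purely hypothetical script, not tied to any particular project), which gives a worklist for reviewing their other contributions:

        #!/usr/bin/env python3
        # Hypothetical starting point for auditing a contributor's other changes
        # in a git repository: list their commits and the files each one touched.
        import subprocess
        import sys

        def commits_by_author(author: str) -> str:
            # `git log --author=<pattern>` filters commits by author;
            # `--name-only` adds the files each commit modified.
            result = subprocess.run(
                ["git", "log", "--oneline", "--name-only", "--author=" + author],
                capture_output=True, text=True, check=True,
            )
            return result.stdout

        if __name__ == "__main__":
            # Usage: python3 audit_author.py "Some Contributor"
            print(commits_by_author(sys.argv[1]))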

    • #12
      Originally posted by chithanh View Post
      Finding out who checked in vulnerable code is not about placing blame; it makes it possible to check that person's other code contributions for similar problems.
      I don't see a difference from closed source development. The same can happen there.

      • #13
        For closed source development, that check can be done only by the small group of people who have access to the source code. Sometimes (e.g. when an employee or contractor has worked for several companies) there might be nobody else who can examine everything.

        Also, serious security vulnerabilities in closed source code sometimes remain unfixed, or are silently fixed, after the vendor becomes aware of them, something which open source projects usually cannot afford to do.

        • #14
          "Sometimes" this and "sometimes" that. There are a lot of "sometimes" for open software too. What is true is that QA needs to happen correctly for both open and closed source software. And "sometimes" this doesn't happen for either.

          Just because a bug was found in a closed source program doesn't prove anything. Lots of bugs are found in open projects too. In the Linux kernel, problems are very often not disclosed at all until the fix is in place. There's a whole business right now around keeping Linux bugs secret up until the patches are developed and go live.
          Last edited by RealNC; 12 April 2012, 12:19 PM.

          • #15
            Originally posted by RealNC View Post
            "Sometimes" this and "sometimes" that. There are a lot of "sometimes" for open software too. What is true is that QA needs to happen correctly for both open and closed source software. And "sometimes" this doesn't happen for either.
            The "sometimes" is different in Open Source and closed source.

            Just because a bug was found in a closed source program doesn't prove anything. Lots of bugs are found in open projects too. In the Linux kernel, problems are very often not disclosed at all until the fix is in place. There's a whole business right now around keeping Linux bugs secret up until the patches are developed and go live.
            The fact is that Open Source is more secure by its nature, and this holds even when exactly the same number of security flaws is found in an Open Source project and in a closed source one. The reason is that everyone can check Open Source projects for security flaws, so nobody can hide anything (though some clever people may keep a flaw secret until someone else discovers it), while in the closed source world only a very limited number of people can check the code, so the chance of discovering a flaw is lower. To sum this up:

            Ten holes in Windows amount to more than ten holes in Linux, because the probability of discovering a flaw in closed source software is lower. PS: Ignore popularity; those are just examples of Open and closed source software.
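
            To illustrate with made-up numbers (the figures are invented, not measurements): if each independent reviewer has some small chance of spotting a given flaw, the chance that at least one person finds it grows quickly with the number of people who can read the code.

                # Toy numbers only: probability that at least one of n independent
                # reviewers spots a given flaw, if each one finds it with probability p.
                def p_found(p: float, n: int) -> float:
                    return 1 - (1 - p) ** n

                # Hypothetical figures: a handful of in-house reviewers versus a
                # large pool of people who can read open code.
                print(f"closed source, 5 reviewers:  {p_found(0.05, 5):.2f}")
                print(f"open source, 200 reviewers:  {p_found(0.05, 200):.2f}")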
            Last edited by kraftman; 12 April 2012, 02:20 PM.

            • #16
              I'm not convinced, unless you mean commercial open source software, where audits are mostly done by paid professionals. In that case, I fully agree; commercial AND open source is a strong combination. Otherwise, you're relying on volunteers.

              Edit:
              I can still think of counter-examples, though. A security flaw in a closed source program that can't be discovered is of no great importance. A security flaw in open code could be spotted by the wrong people. I don't like the "security through obscurity" approach myself, but it does make you think, and I often apply it if it doesn't interfere with cleaner security policies.
              Last edited by RealNC; 12 April 2012, 03:51 PM.

              • #17
                Originally posted by RealNC View Post
                I'm not convinced, unless you mean commercial open source software, where audits are mostly done by paid professionals. In that case, I fully agree; commercial AND open source is a strong combination. Otherwise, you're relying on volunteers.

                Edit:
                I can still think of counter-examples, though. A security flaw in a closed source program that can't be discovered is of no great importance. A security flaw in open code could be spotted by the wrong people. I don't like the "security through obscurity" approach myself, but it does make you think, and I often apply it if it doesn't interfere with cleaner security policies.
                There are other factors that are much more important than just being Open Source. What I wrote is only true for comparable projects: a similar number of devs, similar skills, similar policies, etc. Open Source brings more possibilities, but they have to be exploited, and it's just one of the factors. One of the advantages of Open Source is that anyone can check whether there are any backdoors in the software, or a company can commission a third party to audit the code.
