I thought my use of the term "fire-and-forget" to describe cron-driven server administration made clear that this is something I look down upon. I certainly do not recommend such behaviour. It must be noted, however, that Debian is the only distro in which this does not immediately lead to disaster, and it has therefore, sadly, gained some popularity among Debian admins.
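For context, "fire-and-forget" administration means something like the following hypothetical crontab entry (an illustration of the pattern being criticized, not a recommendation):

```shell
# Hypothetical /etc/cron.d fragment: unattended nightly upgrades, all output
# discarded. Nobody reviews what changed or whether anything broke -- this is
# the "fire-and-forget" pattern under discussion, not a recommended practice.
0 3 * * * root apt-get update -q && apt-get -y upgrade >/dev/null 2>&1
```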
Finding out who checked in vulnerable code is not about placing blame; it allows that person's other code contributions to be checked for similar problems.
For closed source development, that check can be done only by the small group of people who have access to the source code. Sometimes (e.g. when an employee or contractor has worked for several companies) there may be nobody at all who can examine everything.
Also, serious security vulnerabilities sometimes remain unfixed or are silently fixed in closed source code after the vendor becomes aware of them, something which open source projects usually cannot afford.
"Sometimes" this and "sometimes" that. There are a lot of "sometimes" for open software too. What is true is that QA needs to happen correctly for both open and closed source software. And "sometimes" this doesn't happen for either.
Just because a bug was found in a closed source program doesn't prove anything. Lots of bugs are found in open projects too. In the Linux kernel, problems are very often not disclosed at all until the fix is in place. There's a whole business right now around keeping Linux bugs secret up until the patches are developed and go live.
Last edited by RealNC; 04-12-2012 at 12:19 PM.
The fact is, Open Source is more secure by its nature, and this holds even when exactly the same number of security flaws is found in an Open Source project and a closed source one. The reason is that anyone can check an Open Source project for security flaws, so nobody can hide anything (though some smart guys can keep a flaw secret until someone else discovers it), while in the closed source world only a very limited number of people can check the code, so the chance of discovering a flaw is lower. To sum this up, in reply to RealNC's point: "Just because a bug was found in a closed source program doesn't prove anything. Lots of bugs are found in open projects too. In the Linux kernel, problems are very often not disclosed at all until the fix is in place. There's a whole business right now around keeping Linux bugs secret up until the patches are developed and go live."
10 holes found in Windows means more than 10 holes found in Linux, because the probability of discovering a flaw in closed source software is lower. PS. Ignore popularity here; Windows and Linux are just examples of closed and Open Source software.
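The argument can be illustrated with a toy model. Assume each flaw is independently discovered with some per-flaw probability; the probabilities below (0.5 for open source, 0.2 for closed source) are made-up assumptions purely for illustration, not measurements:

```python
# Toy model: if each flaw is discovered independently with probability p,
# the expected number of discovered flaws is total * p, so a count of
# discovered flaws implies roughly discovered / p flaws in total.
def implied_total_flaws(discovered: int, p_discovery: float) -> float:
    """Estimate total flaws implied by a discovered-flaw count."""
    return discovered / p_discovery

# Assumed (hypothetical) discovery probabilities:
P_OPEN = 0.5    # many eyes can audit the code
P_CLOSED = 0.2  # few eyes can audit the code

print(implied_total_flaws(10, P_OPEN))    # 10 found -> ~20 total
print(implied_total_flaws(10, P_CLOSED))  # 10 found -> ~50 total
```

Under these assumed numbers, the same 10 discovered holes imply a larger hidden total for the closed source project, which is the claim being made above.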
Last edited by kraftman; 04-12-2012 at 02:20 PM.
I'm not convinced. Unless you mean commercial open source software, where audits happen mostly by paid professionals. In that case, I fully agree; commercial AND open source is a strong combination. Otherwise, you're relying on volunteers.
I can still think of counter-examples though. A security flaw in a closed source program that can't be discovered is of no great importance. A security flaw in open code could be spotted by the wrong people. I don't like the "security through obscurity" approach myself, but it does make you think, and I often apply it when it doesn't interfere with cleaner security policies.
Last edited by RealNC; 04-12-2012 at 03:51 PM.