Solaris 11.4 To Move From GNOME 2 Desktop To GNOME Shell
If stability above all else is so extremely important for corporations, then why is Google switching to a rolling-release model for its production computers? They are huge, so wouldn't they need extremely well-tested, stable code?
http://news.softpedia.com/news/googl...x-519426.shtml
Last edited by Vistaus; 17 January 2018, 05:19 PM.
Originally posted by pavlerson View Post
Eh? So you do believe that new code is more bug-ridden than old, mature code? Are you serious or trolling? I guess you are trolling. Ask anyone who works or has worked in programming, and he will tell you that you are wrong. You clearly know nothing about programming and have never worked as a programmer. You should not speak about things you know nothing about. Ask any programmer about new code vs. old code.
If Red Hat uses an old and battle-tested kernel instead of the latest kernel, that is because the old kernel probably has fewer bugs and is therefore more stable. Production never uses bleeding-edge software; it always uses old software.
The reality is that newer code, when you are talking about the Linux kernel, is less bug-ridden. The old, so-called mature Linux kernel code is stacked with bad backports of patches that have flaws because the maintainers are not able to alter the internal structures.
pavlerson, it was you who put up the false idea that new code is more bug-ridden than so-called old mature code. Your example is pure bogus. Lots of examples put forward for old mature code being less bug-ridden than new code fail closer inspection.
Originally posted by oiaohm View Post
Let's point out how this simple statement is so far wrong it's not funny. Bug tracking by Red Hat on their kernels shows without question that using old kernels means more bugs than using newer kernels. What is going on here? Simple: a new kernel gets to alter internal structures to fix serious issues, while an old kernel has to have the patches modified to keep the same internal structs, and in a lot of cases being stuck with the same internal structs makes repairing bugs impossible.
So what you said is false. The reality, if you use real-world numbers instead of guesswork, is that an old Linux kernel will probably have more bugs than a more modern version. This also comes from the fact that more complete quality control is used on the Linux-next branch than on the older LTS branches.
But... Hahaha! You are funny! I almost fell into your trap. It sounded like you really meant it!
Originally posted by starshipeleven View Post
I've seen far more C# on company servers than Java. I've also seen a fuckton of Cobol programs still in use by banks and finance, and they still hire Cobol programmers here.
Originally posted by pavlerson View Post
If Red Hat uses an old and battle-tested kernel instead of the latest kernel, that is because the old kernel probably has fewer bugs and is therefore more stable. Production never uses bleeding-edge software; it always uses old software.
So what you said is false. The reality, if you use real-world numbers instead of guesswork, is that an old Linux kernel will have more bugs than a more modern version. This also comes from the fact that more complete quality control is used on the Linux-next branch than on the older LTS branches. The newest tools for detecting coder errors are used on the Linux-next branch first; it might take 4 to 5 years for the old kernel in older versions of Red Hat Enterprise Linux to have those tools run over it. Not only is the kernel old, the quality tools used to validate the kernel's code are old too.
So Enterprise people are very much like the emperor with his new clothes: effectively running around security-naked while claiming to be well dressed.
Saying "we want to remain on old code because it should have fewer bugs" is faith; running that old code without up-to-date validation solutions run over it, to confirm it is in fact quality code, is the opposite of engineering.
The problem here is that Enterprise, a lot of the time, believes in faith with software instead of proper engineering, which says you should be able to validate your design as correct and should update your validation methods based on the latest discoveries.
The claim is even more false when you look at an old version of seL4 vs. a new version of seL4. Because seL4 is mathematically verified, the newer versions of seL4 always have fewer bugs than the older versions.
It costs more to make quality code and keep validation systems properly updated. Believing the myths allows Enterprise to avoid reaching into its pocket and spending more on software development, auditing, and maintenance.
Originally posted by starshipeleven View Post
There is no force in the heavens or in hell that goes and forces anyone to update their systems. Many devices and business servers are running some kind of outdated OS for the sake of running their core software.
But it's irrelevant, because they are NOT accessible from outside their own company network, and they don't care about better performance or anything like that.
This argument is bullshit, because updating the UPSTREAM of an application or OS or anything else does not affect deployed systems. This isn't Windows with mandatory updates, and most database servers have total shit security anyway, as they are mostly dumb appliances sitting on a secured internal network.
Originally posted by starshipeleven View Post
I've seen far more C# on company servers than Java. I've also seen a fuckton of Cobol programs still in use by banks and finance, and they still hire Cobol programmers here.
BTW, Cobol is powering large enterprise mainframes. So in that case, you are working with the truly large servers.