Solaris 11.4 To Move From GNOME 2 Desktop To GNOME Shell


  • Vistaus
    replied
    Originally posted by oiaohm View Post

    Really, just look at the recent KPTI patches. Kernels 4.14 and newer get one version of the fix, because those kernels have a structure change, while 4.13 and earlier get a different version that works around the security flaw. The 4.14-and-newer version is the complete and correct fix. This happens over and over again with backported security patches.
    4.13 and 4.14 are a moot point, as neither is an LTS kernel, so corporations aren't even using them.

  • Vistaus
    replied
    If stability above anything else is so extremely important for corporations, then why is Google switching to a rolling-release model for its production computers? They are huge, so wouldn't they need extremely well-tested, stable code?
    http://news.softpedia.com/news/googl...x-519426.shtml
    Last edited by Vistaus; 17 January 2018, 05:19 PM.

  • oiaohm
    replied
    Originally posted by pavlerson View Post
    Eh? So you do believe that new code is more bug-ridden than old, mature code? Are you serious, or trolling? I guess you are trolling. Ask anyone who works or has worked in programming, and they will tell you that you are wrong. You clearly know nothing about programming and have never worked as a programmer. You should not speak about things you know nothing about. Ask any programmer about new code vs. old code.
    Really, just look at the recent KPTI patches. Kernels 4.14 and newer get one version of the fix, because those kernels have a structure change, while 4.13 and earlier get a different version that works around the security flaw. The 4.14-and-newer version is the complete and correct fix. This happens over and over again with backported security patches.

    If Red Hat uses an old, battle-tested kernel instead of the latest kernel, that is because the old kernel probably has fewer bugs and is therefore more stable. Production never uses bleeding-edge software; it always uses old software.

    The reality is that, for the Linux kernel, newer code is less bug-ridden. The old, so-called mature kernel code is stacked with bad backports of patches that have flaws because they cannot alter the internal structures.

    pavlerson, it was you who put forward the false idea that new code is more bug-ridden than so-called old, mature code. Your example is pure bogus. Many examples put forward of old, mature code being less bug-ridden than new code fail closer inspection.
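    The constraint described above can be sketched in a few lines of C. This is a hypothetical illustration, not actual kernel code: the struct names, field names, and the `WORKAROUND_BIT` flag are all invented. The point is only that an upstream fix is free to add a field to an internal structure, while a stable-branch backport must keep the old layout and squeeze its state into what already exists.

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Upstream (hypothetical "4.14+") may change an internal structure,
     * e.g. add a dedicated field for the state the complete fix needs: */
    struct task_ctx_new {
        unsigned long flags;
        unsigned long pti_state;   /* new field: room for the proper fix */
    };

    /* A stable branch (hypothetical "4.13 and earlier") must keep the old
     * layout, because other backports and out-of-tree code depend on it: */
    struct task_ctx_old {
        unsigned long flags;       /* the workaround must live in here */
    };

    #define WORKAROUND_BIT (1UL << 31)  /* invented: reuse a spare flag bit */

    int main(void) {
        /* The upstream fix stores its state in a dedicated field... */
        struct task_ctx_new n = { .flags = 0, .pti_state = 1 };
        assert(n.pti_state == 1);

        /* ...while the backport overloads an existing field, getting one
         * bit of state instead of a full word, so the fix is weaker. */
        struct task_ctx_old o = { .flags = WORKAROUND_BIT };
        assert(o.flags & WORKAROUND_BIT);
        return 0;
    }
    ```

    The same layout freeze is why a backported patch can only approximate the upstream fix: any change that would grow or reorder the struct is off the table on the stable branch.
    
    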

  • pavlerson
    replied
    Originally posted by oiaohm View Post

    Let's point out how this simple statement is so far wrong it's not funny. Bug tracking by Red Hat on its kernels shows without question that using old kernels means more bugs than using newer kernels. What is going on here? Simple: a new kernel gets to alter internal structures to fix serious issues, while an old kernel has to have the patches modified to keep the same internal structs to deal with the same issues, and in a lot of cases being stuck with the same internal structs makes repairing bugs impossible.

    So what you said is false. The real-world picture, if you use real-world numbers instead of guesswork, is that an old Linux kernel will have more bugs than a more modern version. This also comes from the fact that more complete quality control is applied to the linux-next branch than to the older LTS branches and older kernels.
    Eh? So you do believe that new code is more bug-ridden than old, mature code? Are you serious, or trolling? I guess you are trolling. Ask anyone who works or has worked in programming, and they will tell you that you are wrong. You clearly know nothing about programming and have never worked as a programmer. You should not speak about things you know nothing about. Ask any programmer about new code vs. old code.

    But.... Hahaha! You are funny! I almost fell into your trap. It sounded like you really meant it!

  • aht0
    replied
    Originally posted by starshipeleven View Post
    I've seen far more C# than Java on company servers. I've also seen a fuckton of Cobol programs still in use at banks and financial firms, and they still hire Cobol programmers here.
    AFAIK there's quite a shortage of COBOL programmers: the language is almost six decades old, the "old hands" are retiring or have retired, and there are not enough new COBOL programmers replacing them.

  • grok
    replied
    Originally posted by calum View Post

    Because GNOME Shell is actively maintained by the community, and GNOME 2 isn't...?
    Wow, someone got it!

  • dragon321
    replied
    oiaohm rtfazeberdee Thank you for the answers.

  • calum
    replied
    Originally posted by ElectricPrism View Post

    Why...
    Because GNOME Shell is actively maintained by the community, and GNOME 2 isn't...?

  • oiaohm
    replied
    Originally posted by pavlerson View Post
    If Red Hat uses an old, battle-tested kernel instead of the latest kernel, that is because the old kernel probably has fewer bugs and is therefore more stable. Production never uses bleeding-edge software; it always uses old software.
    Let's point out how this simple statement is so far wrong it's not funny. Bug tracking by Red Hat on its kernels shows without question that using old kernels means more bugs than using newer kernels. What is going on here? Simple: a new kernel gets to alter internal structures to fix serious issues, while an old kernel has to have the patches modified to keep the same internal structs to deal with the same issues, and in a lot of cases being stuck with the same internal structs makes repairing bugs impossible.

    So what you said is false. The real-world picture, if you use real-world numbers instead of guesswork, is that an old Linux kernel will have more bugs than a more modern version. This also comes from the fact that more complete quality control is applied to the linux-next branch than to the older LTS branches and older kernels. The newest tools for detecting coder errors are used on the linux-next branch first; it might take four to five years for the old kernel in an old version of Red Hat Enterprise Linux to have those tools run over it. So not only is the kernel old, the quality tools used to validate the kernel's code are old as well.

    So enterprise people are very much like the emperor in his new clothes: running around effectively security-naked while claiming to be well dressed.

    It's one thing to say you want to stay on old code because it should have fewer bugs (that is faith); it's another to run that old code without applying up-to-date validation tools to make sure it is in fact quality code (that is engineering).

    The problem is that enterprise, a lot of the time, runs on faith with software instead of proper engineering, which says you should be able to validate your design as correct and should update your validation methods based on the latest discoveries.

    The claim is even more clearly false when you compare an old version of seL4 with a new one. Because seL4 is mathematically verified, newer versions of seL4 always have fewer bugs than older ones.

    It costs more to produce quality code and keep validation systems properly up to date. Believing the myths lets enterprises avoid reaching into their pockets and spending more on software development, auditing, and maintenance.

  • pavlerson
    replied
    Originally posted by starshipeleven View Post
    There is no force in heaven or hell that forces anyone to update their systems. Many devices and business servers are running some kind of outdated OS for the sake of running their core software.

    But it's irrelevant, because they are NOT accessible from outside their own company network, and they don't care about better performance or anything like that.

    This argument is bullshit, because updating the UPSTREAM of an application or OS or anything else does not affect deployed systems. This isn't Windows with mandatory updates; most database servers have total shit security anyway, as they are mostly dumb appliances sitting on a secured internal network.
    I am glad that you agree that in enterprise IT stability is the highest priority, which is why they don't want any change.


    Originally posted by starshipeleven View Post
    I've seen far more C# than Java on company servers. I've also seen a fuckton of Cobol programs still in use at banks and financial firms, and they still hire Cobol programmers here.
    If you have seen more C# than Java on company servers, it is because you are not working with the large business servers; you are working with the clients. Take stock exchanges: the exchange itself is exclusively powered by C/C++ or Java running on Unix or Linux. The banks that connect to the exchange have often built their trading servers in C#, but such a trading server is just a client of the real enterprise server: the stock exchange. So when people say they see more C# than Java or C++, they are not working with the large enterprise servers. For instance, the London Stock Exchange tried to rewrite its exchange in C#, but it crashed now and then, so the LSE bought MillenniumIT, which develops stock exchange systems running on Solaris/Linux. So I repeat: no large enterprise server runs C#; it is exclusively C/C++ or Java. Only the clients on the desktops run C#.

    BTW, Cobol powers large enterprise mainframes, so in that case you are working with the truly large servers.
