Canonical Extends Ubuntu LTS Support To 12 Years For Ubuntu Pro Customers


  • #41
    Originally posted by Daktyl198 View Post

    In enterprise, a stable system is the goal. Not the newest software. If the software you need works on a machine running Ubuntu 16.04 and nothing ever crashes, you want to LEAVE IT THERE as long as you possibly can. Upgrading WILL cause minor issues, and minor issues are downtime and money lost. I've seen a machine (not connected to the internet) running Windows 98 and who knows what Java version to maintain a geriatric finance application that the company SWORE they needed for whatever reason. It worked for them.
    We call this "tech debt". You have to pay someday, and we always put it off as long as we can, but putting it off should not be done for its own sake, only to save costs. Eventually, you have to tear off the band-aid, update everything, and fix all the bugs. It's much cheaper to do it regularly/often, rather than in one big multi-billion dollar lump at the 15- or 20-year mark. It also causes less chaos overall.

    So no, "we have always done it like this" is not a good enough justification for continuing in the same manner. If your management doesn't see this, the problem *is* your management.



    • #42
      Originally posted by sophisticles View Post

      I just need to know, what did Ultra ever do to you?

      You chose the nickname "F Ultra"; I don't think that's cool, he probably doesn't even know you.
      There is a . and not a space there, so quite the difference. Also, it's not in English, so the F does not represent what you think it does; it is based on a localized version of a Gary Larson strip from ages ago that I don't think was ever published internationally, since the joke only works in Swedish.



      • #43
        Originally posted by Forge View Post

        We call this "tech debt". You have to pay someday, and we always put it off as long as we can, but putting it off should not be done for its own sake, only to save costs. Eventually, you have to tear off the band-aid, update everything, and fix all the bugs. It's much cheaper to do it regularly/often, rather than in one big multi-billion dollar lump at the 15- or 20-year mark. It also causes less chaos overall.

        So no, "we have always done it like this" is not a good enough justification for continuing in the same manner. If your management doesn't see this, the problem *is* your management.
        This decision actually comes more often from the IT department itself than from management. I once worked at a place where we foolishly merged with our worst competitor (I left that sinking ship quite soon after), and that other company's IT department had spent the last 6 months upgrading to a new version of GCC on their Solaris machines.

        People here who think that rolling distros are the GOAT should try working for a while at a financial institution, like a bank, to get some perspective on how differently some IT departments reason about things like these.



        • #44
          Originally posted by Forge View Post

          We call this "tech debt". You have to pay someday, and we always put it off as long as we can, but putting it off should not be done for its own sake, only to save costs. Eventually, you have to tear off the band-aid, update everything, and fix all the bugs. It's much cheaper to do it regularly/often, rather than in one big multi-billion dollar lump at the 15- or 20-year mark. It also causes less chaos overall.

          So no, "we have always done it like this" is not a good enough justification for continuing in the same manner. If your management doesn't see this, the problem *is* your management.
          Yeah, it's not a good thing. I was just pointing out that it exists, and people have "valid" reasons for it. At least that's what they tell themselves. I never advocate for being on the bleeding edge, but neither should you be that far behind.



          • #45
            Originally posted by Daktyl198 View Post

            In enterprise, a stable system is the goal. Not the newest software. If the software you need works on a machine running Ubuntu 16.04 and nothing ever crashes, you want to LEAVE IT THERE as long as you possibly can. Upgrading WILL cause minor issues, and minor issues are downtime and money lost. I've seen a machine (not connected to the internet) running Windows 98 and who knows what Java version to maintain a geriatric finance application that the company SWORE they needed for whatever reason. It worked for them.
            Exactly. A project I am currently working on is upgrading a control system (city-wide transport infrastructure) that currently runs on Ubuntu 8.04 (servers) and 10.04 (HMIs). The system has been live since 2011, barring one outage caused by a network firmware bug. Every bit of software, every library, database, etc. is extensively tested and validated against the OS and library versions that it ships with.
            The system is being upgraded (via side-by-side deployment) to a validated Ubuntu 16.04 system as a replacement. A future deployment of another instance of the system is in development and undergoing validation for Ubuntu 20.04. The validation of all these components is extensive, expensive, time-consuming and a legal/contractual obligation.



            • #46
              Originally posted by F.Ultra View Post
              This decision actually comes more often from the IT department itself than from management. I once worked at a place where we foolishly merged with our worst competitor (I left that sinking ship quite soon after), and that other company's IT department had spent the last 6 months upgrading to a new version of GCC on their Solaris machines.
              In my own experience, the cost/pain involved in upgrading the stack on which applications run is not so much a function of the underlying OS or language/framework (Linux vs. Windows, PHP vs. Java; each has its own stability promises and has had its fair share of unintended breakages) as a question of "good hygiene" from the application developers.
              Not taking advantage of every bell and whistle available, relying only on widely deployed technology, using a properly layered architecture, SOLID, etc. goes a long way.
              An example of what I am talking about: write your shell scripts in vanilla sh instead of bash.
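              A minimal sketch of the kind of thing I mean (the APP_ENV variable and the myapp paths are made up for the example): the POSIX version below runs under dash, which is what /bin/sh points to on Debian/Ubuntu, while the bash-only constructs mentioned in the comments break as soon as the script is not executed by bash.
              Code:
              #!/bin/sh
              # Portable POSIX sh: runs under dash (Ubuntu's /bin/sh), busybox sh, bash, ...
              APP_ENV="${APP_ENV:-development}"

              if [ "$APP_ENV" = "production" ]; then
                  LOG_DIR=/var/log/myapp
              else
                  LOG_DIR=/tmp/myapp
              fi
              mkdir -p "$LOG_DIR"

              # The bash-only way of writing the same test,
              #   if [[ $APP_ENV == production ]]; then ...
              # (or arrays, ${var,,} lowercasing, etc.) breaks as soon as the
              # script is executed by a plain POSIX /bin/sh such as dash.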

              When the developers are allowed not to think about app portability and future upgrades of the stack, they tend to abuse it in the most critical and comical ways*.
              Otoh, if they know that the stack will be upgraded frequently, they are forced from day one to put in that extra care - and the resulting system will be better off for it - regardless of whether the upgrade cost is spent in frequent small updates or in infrequent big-bang revolutions. And I believe it also reduces the number of bugs not related to upgrades. An investment with nice returns.

              * = a real-life anecdote, from working in an environment where stability was at a premium. We had an app which had been running OK for years, but it kept crashing whenever run on recent processors - and ofc no support contract for it. In order to keep the app running 24/7, IT support resorted to keeping a locker full of old PCs as spares, until it became too much of a burden finding them - the older they got, the more frequently they would break down. We finally managed to get the original developer back on board with the promise of big $$ for a fix. When asked what the root cause of the issue was, he said that he had coded in a watchdog: to make sure the app was not "stuck" and unresponsive, it would run every couple of minutes and terminate the app if needed. Instead of relying on the common practice of sending a ping/pong signal via a socket, or checking for the last update of some data structure, the guy had coded the watchdog based on... CPU usage! A lightly-loaded CPU meant the app was not doing its job, and was thus killed on the spot. You can guess how well that played with Moore's law :-D
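              For contrast, a heartbeat-style watchdog along these lines (just a sketch; the file name, service name and interval are all hypothetical) keeps working no matter how fast or how idle the CPU is: the app touches a file whenever it completes a unit of work, and the watchdog only steps in when that file goes stale.
              Code:
              #!/bin/sh
              # Sketch of a heartbeat watchdog: the monitored app is expected to run
              # `touch /var/run/myapp.heartbeat` every time it completes a work cycle.
              HEARTBEAT=/var/run/myapp.heartbeat
              MAX_AGE_MIN=5      # consider the app hung after 5 minutes of silence

              while true; do
                  # find prints the file only if it was last modified more than
                  # $MAX_AGE_MIN minutes ago; a missing file is also treated as stale.
                  if [ ! -e "$HEARTBEAT" ] || [ -n "$(find "$HEARTBEAT" -mmin +"$MAX_AGE_MIN")" ]; then
                      logger -t watchdog "heartbeat stale, restarting myapp"
                      systemctl restart myapp.service
                      touch "$HEARTBEAT"   # give the app a full interval before the next check
                  fi
                  sleep 60
              done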

              Last edited by gggeek; 27 March 2024, 07:14 AM.



              • #47
                Originally posted by xAlt7x View Post
                I can't recommend an OS that provides packages with known security vulnerabilities.
                The "Universe" repo is enabled by default, and a user will eventually install some package with a known vulnerability from there (e.g. "ImageMagick", "jQuery UI", "OpenEXR", "Exo").
                Indeed, it came as quite a surprise when I found out that there are packages you can install from the "stock" Ubuntu repos which do not get any security maintenance from Canonical unless you get Pro support.
                I wonder how many other Ubuntu users out there are actually aware of this...
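                If I remember the tooling right, recent Ubuntu releases ship a small helper that breaks down how many of your installed packages come from main versus universe and which of them still receive security updates, and the Pro client offers a similar report; worth running on your own machine to see the actual numbers:
                Code:
                # Summary of how many installed packages come from main vs. universe,
                # and which of them still receive security updates:
                ubuntu-security-status

                # With the Ubuntu Pro client installed, a similar overview (including
                # ESM coverage) is available via:
                pro security-status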



                • #48
                  Originally posted by varikonniemi View Post

                  They are backporting fixes for critical and high (+ some medium) severity security bugs for the entire main Ubuntu repository + kernel(s). I don't know if they must have a report of someone using some package for it to be backported (maybe this is a reason for the subscription-based service: your system phones home to tell them what packages you use?)
                  I don't know about Canonical, but all other vendors I had to deal with produced patches in ELTS/post-LTS periods only for issues reported by customers. No one would proactively backport all security issues found at large - after all, most issues reported at large are for more recent versions of the same applications, making it hard and time-consuming even simply to find out whether they apply.

                  Otoh yesterday I checked the stats of the bugs patched by Freexian for Debian ELTS, and the numbers are higher than I expected...



                  • #49
                    Originally posted by gggeek View Post

                    In my own experience, the cost/pain involved in upgrading the stack on which applications run is not so much a function of the underlying OS or language/framework (Linux vs. Windows, PHP vs. Java; each has its own stability promises and has had its fair share of unintended breakages) as a question of "good hygiene" from the application developers.
                    Not taking advantage of every bell and whistle available, relying only on widely deployed technology, using a properly layered architecture, SOLID, etc. goes a long way.
                    An example of what I am talking about: write your shell scripts in vanilla sh instead of bash.

                    When the developers are allowed not to think about app portability and future upgrades of the stack, they tend to abuse it in the most critical and comical ways*.
                    Otoh, if they know that the stack will be upgraded frequently, they are forced from day one to put in that extra care - and the resulting system will be better off for it - regardless of whether the upgrade cost is spent in frequent small updates or in infrequent big-bang revolutions. And I believe it also reduces the number of bugs not related to upgrades. An investment with nice returns.

                    * = a real-life anecdote, from working in an environment where stability was at a premium. We had an app which had been running OK for years, but it kept crashing whenever run on recent processors - and ofc no support contract for it. In order to keep the app running 24/7, IT support resorted to keeping a locker full of old PCs as spares, until it became too much of a burden finding them - the older they got, the more frequently they would break down. We finally managed to get the original developer back on board with the promise of big $$ for a fix. When asked what the root cause of the issue was, he said that he had coded in a watchdog: to make sure the app was not "stuck" and unresponsive, it would run every couple of minutes and terminate the app if needed. Instead of relying on the common practice of sending a ping/pong signal via a socket, or checking for the last update of some data structure, the guy had coded the watchdog based on... CPU usage! A lightly-loaded CPU meant the app was not doing its job, and was thus killed on the spot. You can guess how well that played with Moore's law :-D
                    heheheh. One of the first fixes I did at an old workplace was for a piece of software they had that crashed at the end of every day; the support staff then had to open the source code, edit a value, recompile and start the process again the next morning. This was due to the original developer being too lazy to read in a value produced by another application on a daily basis, so he just called exit(1).

                    Originally posted by gggeek View Post

                    I don't know about Canonical, but all other vendors I had to deal with produced patches in ELTS/post-LTS periods only for issues reported by customers. No one would proactively backport all security issues found at large - after all, most issues reported at large are for more recent versions of the same applications, making it hard and time-consuming even simply to find out whether they apply.

                    Otoh yesterday I checked the stats of the bugs patched by Freexian for Debian ELTS, and the numbers are higher than I expected...
                    AFAIK Ubuntu does this whenever CVEs have been issued, so it's not based on user demand.
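                    E.g., if I recall the Pro client correctly, you can query any published CVE directly and it will tell you whether the installed packages are affected and whether a backported fix is available (the CVE below is just a well-known old one, Heartbleed):
                    Code:
                    # Check whether a given CVE affects the packages installed on this
                    # machine, and apply the backported fix if one is available:
                    pro fix CVE-2014-0160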
                    Last edited by F.Ultra; 27 March 2024, 09:44 AM.

