Linux Developers Look At Upping The GCC Requirements For Building The Kernel


  • #21
    Originally posted by timofonic View Post
    Why so much ancient stuff? There should be a policy about having newer kernels.
    There *is* a policy about having newer kernels - "stay on the old one". You just don't like the policy.

    These are enterprise distros running on servers, so most of the newer things you mentioned do not apply, except for cases like remote visualization/rendering. That said, those remote graphics cases are becoming more important - RH has been updating the drm (graphics) subsystem in older kernels for quite a while now, which is a pretty good compromise between "them that wants stable kernels" and "them that wants new (graphics) features".
    Last edited by bridgman; 17 December 2016, 03:44 AM.
    Test signature



    • #22
      Originally posted by timofonic View Post
      Why so much ancient stuff?
      The usual answer is “because of something proprietary”. There was some piece of software or a custom interface to some piece of hardware that was developed years ago, and the company that created it has gone out of business, or discontinued that product, or wants to charge an arm and a leg to upgrade to the new version. The user figures “if it ain’t broke, don’t fix it”. That is, until it breaks...

      It’s an attitude that invariably leads to trouble. Do you consider IT a strategic asset to your company, or just an unavoidable expense? If it were strategic, you would not let yourself be manoeuvred into a corner like this.



      • #23
        Originally posted by bridgman View Post

        There *is* a policy about having newer kernels - "stay on the old one". You just don't like the policy.

        These are enterprise distros running on servers, so most of the newer things you mentioned do not apply, except for cases like remote visualization/rendering. That said, those remote graphics cases are becoming more important - RH has been updating the drm (graphics) subsystem in older kernels for quite a while now, which is a pretty good compromise between "them that wants stable kernels" and "them that wants new (graphics) features".
        So far, to me it doesn't sound like a good compromise for Fedora. Ubuntu and/or Android will keep pressing forward, and if the kernel devs want to stay attached to software 10 years in the past, then Ubuntu/Android will create a forward fork of the kernel the same way RH now has a backward fork. So the gap between Fedora and Ubuntu/Android will widen, and desktop/laptop/phone/tablet users will move forward this way, not that way. Eventually Linux will become two separate operating systems.
        Last edited by indepe; 17 December 2016, 04:44 AM.



        • #24
          Not sure I understand. There are newer major releases of Ubuntu LTS, RHEL and SLES coming out on a regular basis as well with new kernels - enterprise customers choose to stay on the older major releases / kernels; they are not forced to do so unless there is some nuance I missed.

          That said, my impression is that new userspace is at least as big an obstacle to adopting a newer distro stack as dealing with a new kernel, probably more if anything.
          Last edited by bridgman; 17 December 2016, 06:27 AM.
          Test signature



          • #25
            Originally posted by bridgman View Post
            Not sure I understand. There are newer major releases of Ubuntu LTS, RHEL and SLES coming out on a regular basis as well with new kernels - enterprise customers choose to stay on the older major releases / kernels; they are not forced to do so unless there is some nuance I missed.

            That said, my impression is that new userspace is at least as big an obstacle to adopting a newer distro stack as dealing with a new kernel, probably more if anything.
            I don't have an issue with some customers staying on older versions (although I wonder how much more work they will have to do once the release lifetime does end, which it will).

            My point is that those older releases (especially past 3 years or so) will need to do their own maintenance, and not, for example, ask the latest upstream kernel to support 10-year-old versions of GCC. (10 years of support is the commitment RH is making.)

            [That is referring to the original topic of this article. Apparently your statement is not connected to that anymore.]
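
            For concreteness, the "support" in question is largely a compile-time gate: the kernel's headers fold GCC's version macros into one number and refuse to build below a minimum. A minimal sketch of such a check follows; the 4.9 threshold and the error wording are illustrative assumptions here, not the actual cut-off being debated.

                /* Fold GCC's version macros into one comparable number, e.g. 4.9.2 -> 40902. */
                #define GCC_VERSION (__GNUC__ * 10000 \
                                     + __GNUC_MINOR__ * 100 \
                                     + __GNUC_PATCHLEVEL__)

                /* Refuse to build with anything older than the assumed minimum (4.9.0 here). */
                #if GCC_VERSION < 40900
                # error Sorry, your compiler is too old - please upgrade it.
                #endif

            Raising the requirement mostly means bumping a threshold like this and then deleting the workarounds kept around for older compilers, which is exactly what a ten-year-old enterprise toolchain would run into when trying to build a new upstream kernel.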
            Last edited by indepe; 17 December 2016, 06:50 AM.



            • #26
              Originally posted by timofonic View Post
              WHY???!?!?!??
              1. An old kernel with backported security fixes is safer than a newer kernel, because it lacks new stuff that could have more bugs.

              2. See my above posts about why RHEL does not need newer kernels.



              • #27
                Originally posted by starshipeleven View Post
                1. An old kernel with backported security fixes is safer than a newer kernel, because it lacks new stuff that could have more bugs.

                2. See my above posts about why RHEL does not need newer kernels.
                Not that I am expecting to tell you anything new, but newer technologies can be more reliable than old technologies.



                • #28
                  As far as I can remember (not in the Linux area specifically), most software 10 years ago was distinctly less reliable than what we have today. And the same goes for 20 years ago.

                  EDIT: It may be different in the Linux area, but my experience so far is that new technologies first have more bugs, then after half-a-year to 2 years, they become more reliable than previous tech.

                  So if I wanted or needed max reliability, what I would do is this: always update to the next LTS once it is about a year old, unless there are still specific known issues.

                  And if I were RHEL, I'd provide, insofar as possible and practical, additional packages that ease migration from the older LTS to the newer LTS: forward-compatibility packages on the older LTS and backward-compatibility packages on the newer LTS.
                  Last edited by indepe; 17 December 2016, 11:21 AM.



                  • #29
                    Originally posted by Xelix View Post

                    You are forgetting about people who want to run a more recent kernel on RH6/7. This sounds like a rather common use case to me.
                    I greatly doubt it, and I have worked in a couple of 100+ or even 1000+ server installations.
                    If you use RHEL, you use it for one and only one reason: stability and support.
                    There's little reason to run an upstream kernel instead:
                    - It hinders stability.
                    - You lose official support.
                    - Red Hat backports a -lot- of upstream code (drivers, virtualization, file-systems, etc.) to RHEL 6 and 7 anyway.

                    - Gilboa
                    oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
                    oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
                    oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
                    Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.

