Linux Developers Look At Upping The GCC Requirements For Building The Kernel

  • gilboa
    replied
    Originally posted by Xelix View Post

    You are forgetting about people who want to run a more recent kernel on RH6/7. This sounds like a rather common use case to me.
    I greatly doubt it, and I've worked in a couple of 100+ or even 1000+ server installations.
    If you use RHEL, you use it for one and only one reason: stability and support.
    There's little reason to use an upstream kernel:
    - It hinders stability.
    - You lose official support.
    - Red Hat backports a -lot- of upstream code (drivers, virtualization, file systems, etc.) to RHEL 6 and 7.

    - Gilboa



  • indepe
    replied
    As far as I can remember (not in the Linux area), most software 10 years ago was distinctly less reliable than what we have today. And the same goes for 20 years ago.

    EDIT: It may be different in the Linux area, but my experience so far is that new technologies first have more bugs; then, after half a year to 2 years, they become more reliable than the previous tech.

    So if I wanted or needed max reliability, what I would do is this: always update to the next LTS once it is about a year old, unless there are still specific known issues.

    And if I were RHEL, I'd install, insofar as possible and practical, additional packages, both for forward-compatibility on the older LTS and for backward-compatibility on the newer LTS, to ease migration from the older LTS to the newer LTS.
    Last edited by indepe; 17 December 2016, 11:21 AM.



  • indepe
    replied
    Originally posted by starshipeleven View Post
    1. an old kernel with backported security fixes is safer than a newer kernel, because it lacks new stuff that could have more bugs.

    2. see my above posts about why RHEL does not need newer kernels
    Not that I am expecting to tell you anything new, but newer technologies can be more reliable than old technologies.



  • starshipeleven
    replied
    Originally posted by timofonic View Post
    WHY???!?!?!??
    1. an old kernel with backported security fixes is safer than a newer kernel, because it lacks new stuff that could have more bugs.

    2. see my above posts about why RHEL does not need newer kernels



  • indepe
    replied
    Originally posted by bridgman View Post
    Not sure I understand. There are newer major releases of Ubuntu LTS, RHEL and SLES coming out on a regular basis as well with new kernels - enterprise customers choose to stay on the older major releases / kernels, they are not forced to do so unless there is some nuance I missed.

    That said, my impression is that new userspace is at least as big an obstacle to adopting a newer distro stack as dealing with a new kernel, probably more if anything.
    I don't have an issue with some customers staying on older versions (although I wonder how much more work they will have to do, once the release lifetime does end, which it will.)

    My point is that those older releases (especially past 3 years or so) will need to do their own maintenance, and not, for example, ask the latest upstream kernel to support 10-year-old versions of GCC. (10 years of support is the commitment RH is making.)

    [That is referring to the original topic of this article. Apparently your statement is not connected to that anymore.]
    Last edited by indepe; 17 December 2016, 06:50 AM.



  • bridgman
    replied
    Not sure I understand. There are newer major releases of Ubuntu LTS, RHEL and SLES coming out on a regular basis as well with new kernels - enterprise customers choose to stay on the older major releases / kernels, they are not forced to do so unless there is some nuance I missed.

    That said, my impression is that new userspace is at least as big an obstacle to adopting a newer distro stack as dealing with a new kernel, probably more if anything.
    Last edited by bridgman; 17 December 2016, 06:27 AM.



  • indepe
    replied
    Originally posted by bridgman View Post

    There *is* a policy about having newer kernels - "stay on the old one". You just don't like the policy.

    These are enterprise distros running on servers so most of the newer things you mentioned do not apply, except for cases like remote visualization/rendering. That said, those remote graphics cases are becoming more important - RH has been updating the drm (graphics) subsystem in older kernels for quite a while now, which is a pretty good compromise between "them that wants stable kernels" and "them that wants new (graphics) features".
    So far, to me it doesn't sound like a good compromise for Fedora. Ubuntu and/or Android will keep pressing forward, and if the kernel devs want to stay attached to software 10 years in the past, then Ubuntu/Android will create a forward fork of the kernel the same way RH now has a backward fork. So the gap between Fedora and Ubuntu/Android will widen, and desktop/laptop/phone/tablet users will move forward this way, not that way. Eventually Linux will become two separate operating systems.
    Last edited by indepe; 17 December 2016, 04:44 AM.



  • ldo17
    replied
    Originally posted by timofonic View Post
    Why so much ancient stuff?
    The usual answer is “because of something proprietary”. There was some piece of software or a custom interface to some piece of hardware that was developed years ago, and the company that created it has gone out of business, or discontinued that product, or wants to charge an arm and a leg to upgrade to the new version. The user figures “if it ain’t broke, don’t fix it”. That is, until it breaks...

    It’s an attitude that invariably leads to trouble. Do you consider IT a strategic asset to your company, or just an unavoidable expense? If it were strategic, you would not let yourself be manoeuvred into a corner like this.



  • bridgman
    replied
    Originally posted by timofonic View Post
    Why so much ancient stuff? There should be a policy about having newer kernels.
    There *is* a policy about having newer kernels - "stay on the old one". You just don't like the policy.

    These are enterprise distros running on servers so most of the newer things you mentioned do not apply, except for cases like remote visualization/rendering. That said, those remote graphics cases are becoming more important - RH has been updating the drm (graphics) subsystem in older kernels for quite a while now, which is a pretty good compromise between "them that wants stable kernels" and "them that wants new (graphics) features".
    Last edited by bridgman; 17 December 2016, 03:44 AM.



  • indepe
    replied
    It appears the last time a new ISO file for Ubuntu 10 was produced was in 2012. SLES 11, however, is still active. So it seems there are two, RHEL 6 and SLES 11, not four?

    Are those two asking for functions like ACCESS_ONCE to be changed in that manner? Do they want upstream to support a 10-year-old version of GCC? (The life cycle of RHEL.) Or does RH want Fedora, with Wayland and Vulkan, to have a kernel using modern compiler technology?
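    For context, ACCESS_ONCE was essentially a volatile-cast macro, and its fate is tied to compiler versions: GCC 4.6/4.7 had a bug that could drop the volatile qualifier when the argument was a non-scalar type, which is part of why the kernel moved to READ_ONCE()/WRITE_ONCE(). A simplified user-space sketch of the idea (not the exact kernel source):

    ```c
    #include <assert.h>

    /* Simplified sketch of the old kernel macro: force the compiler to
     * emit a real, single access by casting through a volatile pointer,
     * so the read/write cannot be cached in a register or merged with
     * neighboring accesses. GCC 4.6/4.7 miscompiled this pattern for
     * non-scalar (e.g. struct) arguments, one motivation for the
     * READ_ONCE()/WRITE_ONCE() replacements. */
    #define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

    static int shared_flag;

    int main(void)
    {
        ACCESS_ONCE(shared_flag) = 1;       /* forced single store */
        int v = ACCESS_ONCE(shared_flag);   /* forced single load  */
        assert(v == 1);
        return 0;
    }
    ```

    The macro works as both an lvalue and an rvalue because it is just a dereference; the volatile cast is what constrains the compiler's optimizer around the access.
    
    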

    [Edit: 2012, not 2010]
    Last edited by indepe; 16 December 2016, 11:37 PM.

