
A One Line Kernel Patch Appears To Solve The Recent Linux + Steam Networking Regression


  • #21
    From my point of view the entire discussion is just silly.
    An error occurred in the kernel, it propagated through to the distributions, and now everybody is crying:
    "nobody cares about the desktop" or "Ubuntu got the patch - we need a new distro".
    It simply defies logic.

    First of all, the test suites should be run by the kernel community, and all stakeholders are obliged to contribute tests for the Linux functionality they rely on.
    This includes Valve with Steam-like server applications etc.
    So the first one to blame is Steam, if this was not tested regularly and automatically.
    And if it is in the test suite, one should make sure those test scenarios are considered _before_ an update is recommended ("... should update"), right?
    The distros may have additional tests, mainly for the particular versions of the programs they bundle with a given release - that's it.
    A desktop-oriented distro like Ubuntu, which tries to pull patches from upstream as fast as possible - for new hardware, fixes for specific problems/regressions and so on - must rely on the kernel community.
    And as stated recently on Phoronix, Ubuntu uses the stable trees (or the patches therein) for its kernel (plus vendor additions, probably).

    People bringing up RHEL in a desktop context - sorry, but they don't know what they're talking about.
    Being RHEL certified myself, I should note that RHEL is quite old - tested and robust, perfect for servers. But those making jokes about Debian stable carrying historic versions should be very careful with the other long-term distros out there, which carry plenty of self-made patches for long-gone versions that upstream stopped considering years ago - cough. It works for servers, that is its use case - fine.
    But if you want the latest patches and regression fixes - even the latest releases of the most-attacked programs (browser, mail client, shell, ... you name it) - plus new hardware support as you need it (as will soon be needed for the next Navi), you _must_ rely on automatic testing and be prepared to use PPAs and to struggle until well-tested ground has been established (by vendors, the community and user feedback). Who is willing to wait more than a year to use that hardware on Linux?
    For me that distribution is still Xubuntu, with Debian as a second choice.

    And one of the finest things in this discussion: Linus himself took care of the community feedback to make sure the patch is valid for all Valve gamers (who are not _that_ numerous, sorry) - as he is responsible for the quality of Linux and has the last say, by taking the fix into his tree for possible backports to older kernel releases, thus fighting regressions - and he got blamed for doing so.
    That's a really crazy world I would be ashamed to live in. Those people should sober up again.
    I hope common sense and logic will be restarted in all the brains where they recently crashed.

    And as for Valve/Steam adopting a distro not used by desktop users - they can only move to Mint or Debian if they want to leave Ubuntu, as Fedora, openSUSE, Arch ... you name it ... are not mainstream desktop systems, and I would not recommend them for users switching from Windows or macOS either.
    Considering the timing: Navi needs patches that will land in 19.10 with a lot of luck - so not supporting it means you need PPAs, and thus a moving target, as the reaction to being upset about a lack of testing. Maybe Steam is not interested in early Navi users ... oops, one of their main target groups ...
    Debian and its derivatives rule the desktop by the numbers right now - but that can change. I just don't see anything moving in another direction.
    Similar discussions surround the wannabe new kids on the block - fancy programs/techniques claimed to be superior in every way (like what was said about Alpha CPUs in the 90s, IA-64 earning the name Itanic, or ZFS without a filesystem checker on _servers_ in its first years of existence, which the biggest European Sun server site [where I had worked] would not use, ...).

    Currently Wayland is not ready for the mainstream desktop - I hope it will be soon - it has advantages which _may_ pay off.
    Btrfs may be used by many, but ext4 is overwhelmingly dominant by the numbers - astonishing, but still reasonable, as the latest Phoronix performance tests have shown. There is no other filesystem out there as robust as ext4.
    Blaming ext4 for a rare bug - when suddenly all filesystems turned out to be affected, and only ext4 was widespread enough to produce good problem reports - shows which filesystem is the most reliable and which should get the most care when needed.

    So currently we need to keep the stability focus on ext4 and X - until something really _IS_ better and thereby becomes mainstream.
    Pushing those wannabes into the main distros will only weaken them. A testing ground is not mainstream ... so I like Fedora doing the initial work and Red Hat addressing the paper cuts - but Ubuntu LTS should stay conservative, except where hardware is concerned, as it is still used on desktops.
    One cannot burn a candle at both ends without problems ... and we all have the choice to weigh our requirements for stability against freshness and select the fitting distro. And I don't think early Navi adopters have a better chance of getting working support than with Ubuntu 19.10 (maybe plus PPAs).
    But that remains to be seen - time will tell, so no need for that discussion right now.

    From my perspective all went well - but of course it can work better, and for sure many people will consider which lessons are to be learnt from this case.

    I call this common sense - and the same goes for stressing automatic testing, to get fixes and hardware support to the masses quickly, as the desktop typically requires.
    Is there any technical expert who can disagree?



    • #22
      Originally posted by birdie View Post

      You're contradicting yourself in far too many places which further proves the fact that QA/QC in Linux is basically a swear word and no one is responsible for anything.
      It's funny to hear this from someone who doesn't know what he's talking about. You've already proven you're just talking bullshit all the time.



      • #23
        JMB9

        How the hell did you come to the conclusion that someone mentioned RHEL as a desktop distribution?



        • #24
          Volta
          In a desktop-focused discussion, people mentioned RHEL as a distribution with proper QA (comment #14, responding to the claim that the kernel has no QA), and I just summed up the widely varying points in this forum thread. QA for servers can be much more conservative, so RHEL can reach much better testing than any desktop product can - especially as there are tests running for months to catch those rare errors ...

          On the other hand, Red Hat long ago had a desktop-focused, quite conservative enterprise product for the automotive branch - I got an inside view when I worked in that domain (while responsible for AIX workstations at the time) around 2004. So a desktop with conservative QA similar to RHEL's is nothing unheard of. But I am sure that, with the current pace of hardware generations in the graphics domain and the amount of testing and of bugs and regressions being found, everyone may come to the conclusion that hardware-enablement updates every half year (or sooner) are a necessity - despite the possibility of introducing bugs.

          And RHEL and its derivatives are in use as desktop distros ... like Red Hat Enterprise Linux Workstation.

          It is similar with Canonical using an LTS release and its point sub-releases to bring a new kernel, Mesa and X for hardware enablement (tested beforehand via the STS releases).
          That discussion resembles the one about the stable tree, which gets updates quite often - a good thing for most consumers of that tree.
          And the kernel community talks about distros using its stable branch and recommends following kernel development more closely.
          But a bug slipping through (even via fixes, not only via new functionality required by new hardware) is always a seed for discussing more conservative practices ... for best practices / lessons learnt.



          • #25
            Originally posted by milkylainen View Post
            I think Pierre-Loup is wrong about where primary regression testing for KERNEL CHANGES should occur.
            If every desktop distro went and tested everything from scratch, it would be an endless effort trying to cover everything.
            Agreed on the idea. It's the same reason we like having the source: finding and fixing our own issues where possible. There's no way people are testing every program on the desktop to make sure it doesn't break. It's impossible to go through and explicitly test everything all desktop programs use. Fuzzing the source helps a ton, but even that isn't explicitly testing anything except the source itself.



            • #26
              Originally posted by birdie View Post

              You're contradicting yourself in far too many places which further proves the fact that QA/QC in Linux is basically a swear word and no one is responsible for anything.
              And yet you're not able to answer my question. It seems you're ignoring the fact that it usually takes Linux a few days to provide a fix. When it comes to Windows, it takes months - yet I always see you whining about Linux and defending Windows. So my conclusion is simple: get lost, troll.



              • #27
                Originally posted by milkylainen View Post

                Because it's fucking impossible to fend off the constant stream of fixes to the kernel and figure out what would be relevant to whom and where.
                It's like what? 5k commits per minor stable? Yeah. Good luck sorting that stream of commits out.
                It's the distributions' developers who decide which kernel and userspace they ship. They're responsible for testing whether everything works as intended. Do you expect kernel, gcc, libc, systemd, KDE and Qt developers to test every possible combination and guess what Ubuntu will mix next month? Insane.

                You upgrade the kernel, trusting the kernel devs to do their part of the chain.
                I wouldn't call "TCP-related security fixes" server-only fast-forwards.
                Breaking TCP behavior, be it for Steam or some other application, is bloody relevant for user-space applications.
                You just can't break fundamentals or the userspace API and go all shitfaced about it like it didn't happen.
                Yep, but it seems nobody at Ubuntu cared; they simply copied the kernel from kernel.org and threw it at their users without ANY testing. That's the spirit! Let me rephrase this for you: you upgrade the kernel, trusting the Ubuntu devs to do their part of the chain.

                You either know and take an informed decision with proper heads up or this kind of things will happen.

                Obviously, this isn't a guarantee for anything really.
                Damn, it seems Linus and company force Ubuntu to use the latest kernel, and he didn't even give a guarantee it won't break Steam - which is so important to RHEL, Oracle, Intel and AMD developers. Nobody cared! Seriously, if Ubuntu cares, they should have their kernel team working upstream already. If they don't ... well, get lost.
                Last edited by Wojcian; 22 June 2019, 12:03 PM.



                • #28
                  Regression testing every app is a huge undertaking. Michael and other users caught this one, but who is responsible for kernel regression testing of something like LibreOffice, Lutris, Firefox, etc.? Does any other OS do regression testing per app? Windows? Mac?

                  I'm not sure crapping on the distribution middleman is where the solution lies. None of them ship with Steam. It's hard to see how to solve this other than with user feedback, or a separate tier-2 kernel validation batch/benchmark that would make Michael's servers slow to a crawl.



                  • #29
                    Originally posted by pmorph View Post
                    What I always find fascinating are people who see a missed special case as "no QA at all". Typically such comments come from individuals who know next to nothing about QA.
                    Originally posted by Volta View Post

                    It's funny to hear this from someone who doesn't know what he's talking about. You've already proven you're just talking bullshit all the time.
                    Wow, looks like there's a cult of people here who hit like on every post contradicting what I say. Amazing!

                    Meanwhile, I work in IT, and we do employ extensive unit tests and other methods to guarantee that our product will work for our customers despite using agile development. We've never had major f-ups like the ones which keep happening in the Linux kernel and which have already led to data loss on several occasions.

                    "Do you even know what you're talking about?" - and that's pretty much it. I'm glad both of you are trying to look smug while at the same time having absolutely nothing to say. Quite typical (open source) trolls.

                    Originally posted by Wojcian View Post

                    And yet you're not able to answer my question. It seems you're ignoring the fact that it usually takes Linux a few days to provide a fix. When it comes to Windows, it takes months - yet I always see you whining about Linux and defending Windows. So my conclusion is simple: get lost, troll.
                    Stop BS'ing me. The latest kernel bug which led to data loss took two months to be fixed.

                    The kernel bugzilla contains literally hundreds of open regressions some of which are already several years old.

                    You must be f-ing out of your mind to believe that regressions are instantly/quickly solved in the kernel.

                    Some of them indeed are: very high-profile ones, like the one being discussed here. But that's the exception, not the rule.

                    Bloody Linux fanatics who have never filed or debugged a single issue in the Linux kernel. Do you even know what git bisect is? Have you ever run it? Can you even imagine how difficult it is to debug the kernel and find the source of its regressions?
                    Last edited by birdie; 22 June 2019, 04:23 PM.



                    • #30
                      I'll be taking a break from these forums for a while, because it's cringe-worthy to read all the comments from people who've decided that Linux is perfect, Ubuntu makes no mistakes, and the Linux desktop is good for everyone.

