Arch Linux Finally Rolling Out Glibc 2.27


  • #21
    Originally posted by chuckula View Post

    Having used Arch for a long time, I can say shit most definitely does break. They do a decent job of trying to clean it up, but sometimes, when a bug gets introduced into an upstream package, they will merrily pass the bug along to you and make it difficult to fall back to the most recent working version of the package. The problem with the rolling-release philosophy is that there's minimal regression testing going on to protect you from bad things that happen in the shiny newer versions of many projects. Glibc is probably an exception: it's so fundamental that a showstopper bug will crash everybody's system, as opposed to more isolated regressions that don't affect everybody.
    I agree about Arch breaking. But I don't agree that rolling release = minimal regression testing. Solus and Manjaro are rolling, but they do regression testing and hold back packages if something breaks systems.



    • #22
      Originally posted by schmidtbag View Post
      I agree, but the big difference here is that on Debian-based distros, breakages are notoriously difficult to fix. If something breaks, it's a serious headache relative to how another distro (like Arch) would handle it. This is usually because the package manager is trying to prevent the user from digging themselves into a deeper hole, but sometimes the user actually knows what they need to do while the package manager is holding them back. The irony is that sometimes you can just run "apt-get upgrade", blindly agree to the changes, and find dozens of your programs have been uninstalled, or your system no longer boots beyond the command line. For something that seems to prioritize idiot-proofing, this is a surprisingly common user-unfriendly situation. That being said, I tend to use Debian Testing (the computer I'm writing this on runs it), since it's relatively new and isn't very prone to trashing my whole setup.
      I never understood why some people use Debian Testing for anything other than testing. I never "use" it myself; for me it's either Stable or Sid.

      For example, I recommend Testing only once it is frozen (roughly that six-month window), because at that point it becomes the next Stable, or if you like, a Stable alpha that is basically "ready for testing". Someone who plans to run the next Stable should start testing it then; that is why this development branch is called Testing in the first place: it exists to test the next Stable.

      Testing does not really roll, and it has no security model, or at best a worse one. I have no idea who would want to use it outside the freeze period, except perhaps people who base their own distro on it and want to start earlier...

      For example, Google uses Debian Testing for their internal gLinux distro, but that is just playing with words: they don't really use Testing. They pick packages from Testing and test them before including them in their distro. It is clearly not a branch of Debian meant to be used, but one meant to be tested; it exists so people test it and file bugs before the release happens.

      If someone can't manage Sid and doesn't want to test and file bugs, they should use Stable.
      Last edited by dungeon; 20 April 2018, 11:58 AM.
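
      (On the "apt-get upgrade" anecdote quoted above: apt can rehearse an upgrade without touching the system, so the surprise removals are at least visible in advance. A minimal sketch using standard apt options; nothing distro-specific is assumed:)

          # Dry-run first: -s / --simulate prints what would happen, changes nothing.
          apt-get -s dist-upgrade

          # Check the "The following packages will be REMOVED" section in that
          # output; only run the real upgrade once the list looks sane.
          apt-get dist-upgrade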



      • #23
        Originally posted by dungeon View Post

        The prerelease of glibc 2.27 sat in the Debian experimental repo for about two months before it entered Sid. What do you think it was doing there? Do you still think it was pushed without any thinking?

        And if you want to know, the new glibc would have been pushed to Sid right around its release date, but Ubuntu was in freeze, so as soon as that ended it entered immediately. It was really ready three months ago; even I know that, since I was testing it at about that time.
        There's a huge difference between pushing things into a recently unfrozen prerelease, or into the unstable repo where breakages are not only justified but expected, and pushing them into your "production-ready" branch, which happens to be a rolling release. Arch also has an experimental branch; they just took more time to test, and I doubt they have the manpower of Ubuntu and Debian for that, either.
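
        (If you want to check this kind of migration yourself, the devscripts package ships rmadison, which queries the Debian archive for a source package's version in each suite. A small sketch; the output shown is the general shape, not a real capture:)

            # Ask the Debian archive where the glibc source package sits.
            rmadison glibc

            # Typical output shape (versions and suites will vary):
            #   glibc | 2.24-11+deb9u3 | stable       | source
            #   glibc | 2.27-3         | unstable     | source
            #   glibc | 2.27-3         | experimental | source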



        • #24
          Originally posted by andrebrait View Post

          There's a huge difference between pushing things into a recently unfrozen prerelease or into the unstable repo...
          I think you misunderstood something there... Ubuntu was not in _their_ freeze yet at the time, and since they import from Sid, the new glibc (regardless of it being ready) couldn't be uploaded to Sid until Ubuntu stopped importing from Debian. I don't know how to explain it better than that.

          Let's say Ubuntu is a _user_ of Debian. Valve is a _user_ of Debian. Google is also a _user_ of Debian... so sometimes our users (who are often also packagers) have certain priorities. Well, sometimes things don't roll straight because of these users and their plans.

          the unstable repo, where breakages are not only justified but expected, and pushing them into your "production-ready" branch, which happens to be a rolling release.
          Users are the priority in Debian, not rolling per se. Yes, Debian is not a rolling release; call it a user-driven release. Rolling is not the priority there, just a secondary but natural thing.

          Arch also has an experimental branch; they just took more time to test, and I doubt they have the manpower of Ubuntu and Debian for that, either.
          What Arch has as an experimental branch does not exist in the official Debian repos. Debian's experimental is roughly what the testing repo is for Arch. Debian Testing is something else entirely, something outside Arch's scope.

          Arch does not need many developers, since no one there does things like Debian's testing, stable, old-stable, and LTS. A rolling release depends on the good will of upstream; most of the time things go AS-IS on x86, but sometimes they don't, and as soon as that happens you get a little (or big) delay, don't you?
          Last edited by dungeon; 20 April 2018, 12:39 PM.
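
          (For reference, the Arch staging area discussed here is the [testing] repo, opted into by uncommenting it in /etc/pacman.conf. A sketch of the stock layout as it looked around this thread's time; repo order matters:)

              # /etc/pacman.conf -- uncomment [testing] to receive staged packages.
              # It must stay listed above [core]/[extra] so its versions win.
              [testing]
              Include = /etc/pacman.d/mirrorlist

              [core]
              Include = /etc/pacman.d/mirrorlist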



          • #25
            Originally posted by dungeon View Post
            I never understood why some people use Debian Testing for anything other than testing. I never "use" it myself; for me it's either Stable or Sid.
            Because, as clarified before, Sid is unstable (both literally and by definition), and in Debian's definition stable = old; sometimes year-old software doesn't get the job done the way I need it to. Testing is "new enough" while being mostly stable.
            Testing does not really roll, and it has no security model, or at best a worse one.
            And Sid does?
            If someone can't manage Sid and doesn't want to test and file bugs, they should use Stable.
            I can manage Sid, but it's a pain in the ass to do so; if you want cutting-edge and rolling release, Arch is definitely a better option.
            Debian Stable isn't practical for a lot of people. It's perfectly fine (if not recommendable) for a server or serious workstation, but for the average home PC it's inconveniently outdated.

            That being said, for my home PCs, I use Arch. For my work PC and home server, I use Debian Testing. For another server I put together, I use Debian Stable.



            • #26
              Originally posted by schmidtbag View Post
              And Sid does?
              Security? Of course, fixes go straight in. There are no artificial planned delays there, only a potential human delay factor.
              Last edited by dungeon; 20 April 2018, 12:51 PM.



              • #27
                Originally posted by dungeon View Post
                Security? Of course, fixes go straight in. There are no artificial delays, only a potential human delay factor.
                And yet you also advocate for Stable? Don't you realize that has a human delay factor too? Keep in mind that Debian's Stable devs patch a lot of their own packages. Not only does that mean there is a human delay factor, it also increases the chances of human error.
                Also, though Sid may get the latest security patches immediately, it also has the least testing done, meaning new vulnerabilities could still be discovered. And of course, there's the potential for other regressions and instability that come with it.



                • #28
                  Originally posted by schmidtbag View Post
                  And yet you also advocate for Stable? Don't you realize that has a human delay factor, too?
                  Yes, of course that has a human factor too, and that is the only factor for both Stable and Sid. On top of the human factor, Testing has a planned delay factor, and no one cares if some package gets stuck there because of a bug you might not care about, or sits in a transition, sometimes for months.

                  it also increases the chances of human error.
                  Yes, human error is also possible... but that's no different from how, let's say, out of 100 Ryzen CPUs some are genuinely broken, so the user should do an RMA.

                  AMD won't tell you how many are broken on average, and Bridgman in particular won't... but I will tell you, and everybody should know, that it really happens; so just RMA that fresh-looking, very new CPU. Out of billions of transistors, some always have errors and various flaws.

                  I mean really, errors are possible everywhere, not just in humans but everywhere in nature.
                  Last edited by dungeon; 20 April 2018, 01:12 PM.



                  • #29
                    Several years ago I used Debian Sid, and I just remember how terrible it was to use and how easily it could break. Packages were often updated slowly, which is where experimental would have to be enabled. Oh man, when I first tested Arch Linux it was such a relief: everything worked as it should. Breakages happen, but usually because of upstream bugs, and I don't consider that Arch's fault. If, for example, a KDE app doesn't work as it should, then the bugs need to be fixed upstream in time, and development shouldn't be dragged down by outdated bug reports against software that is several years old.
                    Last edited by R41N3R; 21 April 2018, 03:50 AM.



                    • #30
                      Originally posted by dungeon View Post
                      Yes, of course that has a human factor too, and that is the only factor for both Stable and Sid. On top of the human factor, Testing has a planned delay factor, and no one cares if some package gets stuck there because of a bug you might not care about, or sits in a transition, sometimes for months.
                      I'm so confused... I thought you were saying Testing is bad because of the human delay factor, and now you're saying only Stable and Sid have it? Also, didn't you just say Sid doesn't have a delay factor?
                      Also, Testing doesn't really get packages that are stuck due to a bug; if something is very broken in Sid, it doesn't trickle down into Testing. Meanwhile, if you use Sid and something is broken, you're stuck with it for however many days or months that may be. I've never had a long-term issue using Testing, but I have had them with Sid. Meanwhile, I've had usability issues or missing features due to how outdated Stable is.
                      I firmly believe Testing is a good middle ground, at least for desktop users seeking a semi-modern and mostly reliable system.
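
                      (For anyone running this Testing-with-occasional-Sid setup, standard APT pinning keeps Testing as the default while letting you cherry-pick from Sid. A sketch, assuming both suites are already listed in sources.list:)

                          # /etc/apt/preferences -- prefer testing by default.
                          Package: *
                          Pin: release a=testing
                          Pin-Priority: 900

                          # Keep unstable visible but never auto-selected.
                          Package: *
                          Pin: release a=unstable
                          Pin-Priority: 300

                      (A single package can then be pulled from Sid explicitly with "apt-get install -t unstable <package>".)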

