Arch Linux Finally Rolling Out Glibc 2.27
-
Originally posted by schmidtbag: I agree, but the big difference here is that Debian-based distros are notoriously difficult to fix when something breaks. If something breaks, it's a serious headache relative to how another distro (like Arch) would handle it. This is usually because the package manager is trying to prevent the user from digging themselves into a deeper hole, but sometimes the user actually knows what they need to do while the package manager holds them back. The irony is that sometimes you can just run "apt-get upgrade", blindly agree to the changes, and find dozens of your programs have been uninstalled, or your system no longer boots past the command line. For something that seems to prioritize idiot-proofing, this is a strikingly common user-unfriendly situation. That being said, I tend to use Debian Testing (the computer I'm writing this on uses it), since it's relatively new and isn't very prone to trashing my whole setup.
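A side note on the "blindly agree" failure mode above: plain `apt-get upgrade` never removes packages, while `apt-get dist-upgrade` can, which is where the mass removals usually sneak in. A minimal sketch of previewing a Debian upgrade before agreeing to it (standard apt-get options; intended to be run on a Debian/Ubuntu system):

```shell
# Refresh the package lists first
sudo apt-get update

# Dry run: print every action apt would take without performing any of it
apt-get --simulate dist-upgrade

# Show only the packages the transaction would remove
apt-get --simulate dist-upgrade | grep '^Remv'
```

If the simulation lists removals you don't expect, it's safer to upgrade the relevant packages individually than to agree to the whole transaction.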
For example, I recommend Testing only once it is frozen (roughly that six-month window), because at that point it becomes the next stable, a "stable alpha" if you like, basically "ready for testing". Someone who plans to use the next stable should start testing it then; that is why this development branch is called Testing in the first place: it exists for testing the next stable.
Testing does not really roll, and it has no security model, or at least a worse one. I have no idea who would want to use it except for testing during the freeze period; some might use it earlier only because they base their own distro on it and want a head start...
For example, Google uses Debian Testing for their internal gLinux distro. But that is just playing with words; they don't really use Testing, they pick packages from Testing and test them before including them in their own distro. It is clearly not a branch of Debian meant to be used, but one meant to be tested: it is there for testing and for filing bugs before the release happens.
If someone can't manage Sid and doesn't want to test and file bugs, they should use Stable.
Last edited by dungeon; 20 April 2018, 11:58 AM.
Comment
-
Originally posted by dungeon:
A prerelease of glibc 2.27 sat in the Debian experimental repo for about two months before it entered Sid; what do you think it was doing there? Do you still think it was pushed without any thought?
And if you want to know, the new glibc would have been pushed into Sid around its upstream release date, but Ubuntu was in freeze, so as soon as the freeze lifted it entered immediately. It was really ready three months ago; I know that much, as I was testing it around that time.
- Likes 1
Comment
-
Originally posted by andrebrait:
There's a huge difference between pushing things to a recently unfrozen prerelease and pushing them to the unstable repo...
Let's say Ubuntu is a _user_ of Debian. Valve is a _user_ of Debian. Google is also a _user_ of Debian... so sometimes our users (who are often also packagers) have certain priorities. Well, sometimes things don't roll straight because of these users and their plans.
...the unstable repo, where breakages are not only justified but expected, and pushing it to your "production-ready" branch, which happens to be a rolling release.
Arch also has an experimental branch; they just took more time to test, and I doubt they have the manpower of Ubuntu and Debian for that, either.
Arch does not need many developers, as no one there does things like Debian's testing, stable, old-stable, LTS... A rolling release depends on the good will of upstream; most of the time things go AS-IS on x86, but sometimes they don't... and as soon as that happens you get a little (or big) delay, don't you?
Last edited by dungeon; 20 April 2018, 12:39 PM.
Comment
-
Originally posted by dungeon: I never understood why some people use Debian Testing for anything other than testing; I really never "use" it. Either Stable or Sid.
Testing does not really roll, and it has no security model, or at least a worse one.
If someone can't manage Sid and doesn't want to test and file bugs, they should use Stable.
Debian Stable isn't practical for a lot of people. It's perfectly fine (if not recommendable) for a server or serious workstation, but for the average home PC it's inconveniently outdated.
That being said, for my home PCs I use Arch. For my work PC and home server, I use Debian Testing. For another server I put together, I use Debian Stable.
Comment
-
Originally posted by dungeon: Security? Of course, it goes right there. There are no artificial delays, other than the potential human delay factor.
Also, though Sid may get the latest security patches immediately, it also has the least testing done, meaning new vulnerabilities could still be discovered. And of course, there's the potential for other regressions and instability that come with that.
Comment
-
Originally posted by schmidtbag: And yet you also advocate for Stable? Don't you realize that has a human delay factor too?
...but it increases the chances of human error.
AMD won't tell you how many are broken on average, and Bridgman in particular won't... but I will tell you, and everybody should know that this really exists, so just RMA that fresh-looking brand-new CPU. Out of billions of transistors, some always have errors and various flaws.
I mean really, errors are possible everywhere, not just in humans but everywhere in nature.
Last edited by dungeon; 20 April 2018, 01:12 PM.
Comment
-
Several years ago I used Debian Sid, and I just remember how terrible it was to use and how easily it could break. Packages were often updated slowly, so experimental would have to be enabled on top of it. Oh man, when I first tested Arch Linux it was such a relief. Everything worked as it should. Breakages happen, but they are usually caused by upstream bugs, and that is something I don't consider Arch's fault. If, for example, a KDE app doesn't work as it should, then the bugs need to be fixed upstream in time; development shouldn't be delayed by outdated bug reports against software that is several years old.
Last edited by R41N3R; 21 April 2018, 03:50 AM.
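For readers wondering what "enabling experimental" on top of Sid looks like in practice, here is a minimal sketch using apt pinning. The file paths and the priority value are conventional choices, not anything R41N3R specified; pinning experimental at priority 1 means nothing is ever pulled from it automatically:

```
# /etc/apt/sources.list.d/experimental.list
deb http://deb.debian.org/debian experimental main

# /etc/apt/preferences.d/experimental
Package: *
Pin: release a=experimental
Pin-Priority: 1
```

With that in place, you opt in per package with something like `apt-get install -t experimental <package>`, while everything else keeps tracking Sid.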
- Likes 1
Comment
-
Originally posted by dungeon: Yes, of course that has a human factor too. And that is the only factor for both Stable and Sid. On top of the human factor, Testing has a planned factor, and no one cares if some package is stuck because of a bug you might not care about, or sits in transition, sometimes even for months.
Also, Testing doesn't really get packages that are stuck due to a bug; if something is very broken in Sid, it doesn't trickle down into Testing. Meanwhile, if you use Sid and something is broken, you're kind of stuck with it, for however many days or months that may be. I've never had a long-term issue using Testing, but I have had them with Sid. Meanwhile, I've had usability issues or missing features due to how outdated Stable is.
I firmly believe that Testing is a good middle ground, for desktop users seeking a semi-modern and mostly reliable system, anyway.
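One concrete way to check whether a package is stuck between Sid and Testing is to compare its version across the suites. A small sketch using `rmadison` from Debian's devscripts package (it queries the official archive over the network; glibc is just an example package):

```shell
# List the version of glibc in each Debian suite
rmadison glibc

# Output is one line per suite, roughly:
#   glibc | <version> | unstable | source
#   glibc | <version> | testing  | source
# If unstable stays ahead of testing for weeks, the migration is stuck.
```

The package's page on tracker.debian.org shows the same migration status, including which bugs or transitions are blocking it.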
Comment