A Few Worrisome Regressions Appear In Ubuntu 15.04 vs. 15.10 Performance


  • #41
    Arch works well if you stick mainly with FOSS drivers across the board, and maybe Nvidia.

    But things can break, especially if you use Catalyst. It's always important not to upgrade major components without first checking the site's news section and forums for possible problems and workarounds.

    For instance... it appears GNOME 3.18, which was pushed to Arch the other day, is broken with Catalyst 15.9 but works with the FOSS drivers.

    Comment


    • #42
      If the kernel is the cause of the problem, this will happen on all Linux platforms. It's not an Ubuntu question.
      Last edited by Azrael5; 11 October 2015, 12:08 PM.

      Comment


      • #43
        Originally posted by asdfblah View Post
        Why are (some?) arch users so annoying and stupid? Always derailing threads with their ignorant and stupid elitism...
        Some, far from all. I run Arch and I'm happy with it and yes, I like running more bleeding-edge software and I've personally only once experienced a breaking upgrade. But I don't get the elitism some Arch users show. As far as I'm concerned, everyone should choose what works well for them. Whether that be Arch, Ubuntu, Fedora, Windows, Mac OS X or even DOS. Something about 'tool for the job' also comes to mind...

        Comment


        • #44
          Don't they say that Slackware is the only true Linux? =p

          Comment


          • #45
            Originally posted by kaprikawn View Post

            The distro pi$$ing match on display here is really quite depressing. Can't we just agree on the maxim 'different strokes for different folks'?
            After 375 posts, have you not figured out that half the posts here are about "my favorite distro/video card maker is better than yours" and generally "I know better, don't tell me otherwise"? Skip those, and these forums are actually interesting.

            Comment


            • #46
              Originally posted by mmstick View Post
              It's very rare for a service that has already reached stability to drop a feature without warning that that feature will be deprecated well in advance.
              On the other hand, tracking warnings and changelogs for hundreds or even thousands of libs, programs, and whatnot is time-consuming and, overall, a daunting task. At the moment I have more than 2000 packages on my desktop, and I'm really not in the mood to track all their changelogs as something mandatory.

              If you've done a major version uplift on 100 libs and 100 programs to "these new cool versions", you have to forget all previous testing and re-validate the behavior of the whole system to see what has fallen apart, how the new versions interact with each other, and so on. From an engineering perspective, it's a task worthy of a whole test lab. Needless to say, if I just need an OS to support my tasks, I don't dream of turning into something between a testing lab and a maintainer.

              It isn't hard to make small changes where they are needed to keep services up-to-date throughout the years.
              Sure, it's not hard for one service. But when you've got 2000 packages, it starts to take a lot of time on busywork, and there are no real benefits which justify such a waste of time. It's like doing maintenance not for a few packages which will be used by many people, but for a whole system which will land on several PCs at the very most.

              The idea that Gentoo or Arch has breakages is silly. Gentoo, especially, tests its ebuilds heavily before unmasking them for the entire userbase.
              You can't test all possible combos. Learn some combinatorics and you'll see: the number of possible cases to test explodes to unimaginably huge numbers. You can't test everything.

              The whole point of locking lib and program versions between releases is to try to deal with this issue, at least partially. More or less popular combos of typical software then get some chance of actually being tested. Yet the coverage is incomplete, and you can get hit by bugs anyway, especially when you're running an unusual program or configuration.

              Hint: usually a configuration similar to the dev's (or maintainer's) computer works best, because at least one human has at least partially tested how it performs :-).

              Arch tests packages in testing repositories for at least two weeks before doing the same.
              Do you honestly think you can test all possible interactions of all libs and programs in all possible configurations in two weeks? If you have 5 possible versions of libA, it depends on libB with another 5 possible versions, and you have 3 versions of a program using libA, that means at least 5 * 5 * 3 = 75 test runs to check all possible combos. And this is a very simple "hello world" example which doesn't even look at system libs, the kernel, etc. In the real world you stand no chance at all: the number of possible combinations quickly skyrockets to numbers you couldn't test even by spending your whole life on it.
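
              Just to make the explosion concrete, a quick back-of-the-envelope check (a minimal sketch using the toy figures from above, not real package counts):

              Code:
              # Toy numbers: 5 versions of libA x 5 of libB x 3 of the program.
              echo "5 * 5 * 3" | bc   # 75 runs for just three packages
              # Scale the same idea up to a mere 20 packages, 5 versions each:
              echo "5^20" | bc        # 95367431640625 possible combos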

              So it's all about reducing this complexity and trying to get the best possible coverage while minimizing effort. The approach taken by Debian and Ubuntu is one such technique.

              If you find that something breaks on Arch, it's no different than finding an issue with a new release of Debian, Ubuntu or Fedora.
              Arch would dare to replace init with systemd on the go, and the system would break, especially if the user doesn't read the README. Ubuntu and Debian defer such changes until a new major version of the distro is released, which makes it safe to apply updates within a release without grinding through a shitload of changelogs. With Debian and Ubuntu one can actually KNOW when things will need extra supervision, and those cases are rare. That's the whole point of a version freeze. Sure, it's no silver bullet, but it partially solves some of the issues mentioned.

              Of course, this does not apply to Debian's testing or unstable repos: that way one can get something Arch- or Gentoo-like on Debian and enjoy all the pitfalls of a semi-rolling distro.

              Believe it or not, I've had breakage happen just by upgrading to the next LTS of Ubuntu. You can't expect to never maintain your system.
              A major distro version upgrade is the point where all the breakage is supposed to happen. Ideally there would be no breakage at all, but from a realistic standpoint, IMHO, I'm better off with major breakage isolated to a known point in time where I can supervise it and make sure it happens when it causes minimal impact. And it's actually possible to "prototype" the upgrade on a similar config to get an idea of what would fail; that's how serious production environments do major upgrades, and to my taste it's easier in distros like Debian and Ubuntu. Debian even takes this idea further and bothers with reproducible builds, so one can actually count on the fact that they tested what they planned, not something else.

              Rolling distros are inherently more troublesome and less tested, but they have more recent software versions. That's the tradeoff, and in no way is it a silver bullet; in some cases it does not work well. Say, an ancient Mesa would be crap compared to a more recent one, and once that is known, there is little gain in testing it further. Not to mention that Debian's policies don't handle this corner case reasonably well, though Ubuntu devs are trying various things to improve matters, and even Debian has "backports". These are attempts to mitigate the issues resulting from the chosen approach.

              In addition, we do more than just push buttons because we can. Some of us like to have more control over our system, and a greater understanding of how it all comes together.
              I can understand that, and I think it's good for a person to develop themselves and learn new things. However, you put it as if that can't be done on other systems, which is wrong. I build, for example, task-specific Debian re-spins to run on embedded systems. Needless to say, these are very specific and custom things, and I even bother to build a custom kernel with proper defaults, targeting systems that run without human supervision at all. There isn't even anybody around to hit reset, except the watchdog, of course.

              Having a better package manager is a huge incentive to use Arch and Gentoo, as there are a lot of shortcomings to DNF and APT which Pacman and Paludis/Portage...
              Speaking for myself, I like Debian and Ubuntu because a single uniform set of tools gets me both binaries and source in a universal way, with all the nice features. Take authentication: I can check that the source I've got comes from the maintainer and not from some random MITM along the way. The same goes for precompiled binaries.

              And you only have to build stuff yourself when you really have a good reason that justifies it. Though I would somewhat agree there is room for improvement.

              For example, can you install the entire KDE suite on Ubuntu alongside GNOME, then later decide you don't want it and completely uninstall it without worrying about leftover dependencies?
              In fact, that's the whole point of package management, and the whole reason to put effort into packaging! And yes, you can do it, and it performs reasonably compared to many other solutions one can think of. But the package manager defaults to safe choices and may leave behind some "auto-removable" libs or config files.

              You can get rid of auto-removable libs in an automated way, though, and the same goes for configs. Still, some programs generate extra runtime data, and the package manager can't know EVERYTHING a program might do. If you want an absolutely exact revert, you're better off with snapshots: those discard ALL runtime data, even data that was never listed anywhere as "config". But that can be way too harsh, and it's much less precise in its actions. You just grab the time machine and go to the past, and everything after that point no longer exists. It may or may not be what you want; it's a rather different kind of solution.
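
              For instance, a minimal sketch of that cleanup on Debian/Ubuntu ("somepkg" is just a placeholder name):

              Code:
              # Remove a package together with its config files...
              sudo apt-get purge somepkg
              # ...then sweep out automatically installed dependencies
              # that nothing else needs anymore.
              sudo apt-get autoremove --purge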

              That's my biggest problem with Ubuntu: inability to uninstall meta package dependencies.
              A metapackage is just a convenient shortcut to install (and later update) a whole bunch of packages in one shot, as a single entity that makes sense as a more or less complete feature. So if one just wants a "KDE desktop" in one shot and doesn't care much about the details, that's what metapackages are for. On the other hand, if one wants more flexibility, there is no need to install the metapackage at all: you can grab a minimal system and install only what you want. Or, if you've got the full KDE and want to trash some parts, remove the metapackage; then you no longer have "all or nothing" and can fiddle with packages in a more fine-grained way, trashing individual KDE programs and so on (see the sketch below).

              Sure, sometimes you can run into strange dependencies between parts (KDE itself has been dumb about this; it lacked a modular design in the first place and only recently realized that was the wrong approach). And while someone determined to do something at any cost can override package management in brutal ways, that could be an issue. If we're talking shortcomings, this is one of them: if you run into "strange" deps, getting rid of them can be challenging in some corner cases. Yes, the system was meant for production machines and for keeping them working in the first place, so breaking everything apart can be inconvenient and require uncommon actions. But one can in fact do it, if that's what they want.
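
              Roughly like this, assuming a Kubuntu-style setup (the package names are illustrative):

              Code:
              # Dropping the metapackage removes the "all or nothing" tie...
              sudo apt-get remove kubuntu-desktop
              # ...after which individual KDE programs can go one by one,
              sudo apt-get remove kmail konqueror
              # and orphaned dependencies get cleaned up in one sweep.
              sudo apt-get autoremove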

              How easy is it to get the script for building your packages and make slight changes to them to apply a patch or enable/disable features that you want/don't want? Arch makes this especially easy with makepkg and PKGBUILDs.
              The idea of applying a patch to the upstream version in an automatic way sucks in terms of reliability: a patch isn't guaranteed to apply to an arbitrary upstream version. So I would consider it an uncommon and troublesome wish in the first place; it means one isn't going for a production system at all. And by the way, if someone just wants to install some crap they built on their own, and they're damn sure they know better how to do it, a quick-and-dirty install is possible with checkinstall. While that's in no way the proper way to build a package, it at least lets an arbitrary program be managed via the package manager, so you can uninstall it without remembering all its files yourself.

              That said, it isn't really hard to do something like this once you understand how package management works: just do apt-get source, patch it, build it, and so on. There are tools to get the source, authenticate it, build it, etc.; gluing them together is up to whoever writes the script, I guess. But sure, it's not a common scenario that apt readily targets.
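
              A rough sketch of that flow ("somepkg" and the patch file are hypothetical):

              Code:
              # Fetch (and authenticate) the packaging source...
              apt-get source somepkg
              cd somepkg-*/
              # ...apply a local patch (not guaranteed to apply cleanly!)...
              patch -p1 < ~/my-fix.patch
              # ...then rebuild unsigned packages and install the result.
              dpkg-buildpackage -us -uc
              sudo dpkg -i ../somepkg_*.deb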

              Also, I will call nonsense on your idea that Gentoo cannot lock versions, and the same for Arch. Gentoo can, in fact, lock versions...
              There is a big difference between "can" and "do". In Debian and Ubuntu you have it as the default policy; in Arch and Gentoo it's up to you to turn into a full-blown maintainer and fiddle with all of this yourself. That's what makes Debian or Ubuntu an order of magnitude simpler to run and saves a load of time: this job, like many others, is offloaded to the maintainers.
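
              And when you do want an explicit per-package lock on Debian/Ubuntu, it's one command away (the package name is just an example):

              Code:
              sudo apt-mark hold firefox     # refuse upgrades for this package
              apt-mark showhold              # list currently held packages
              sudo apt-mark unhold firefox   # allow upgrades again later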

              You may even create your own pacman mirror that only includes packages that you've tested yourself.
              And I can do the same for Debian/Ubuntu. So what?

              Comment


              • #47
                People will use whatever distribution they want; each has its pros and cons. I prefer bleeding-edge even if it occasionally means breakage. Some people like the compromise Manjaro offers by being more bleeding-edge than Ubuntu or Debian but slightly more focused on stability than Arch.

                However, Arch doesn't just blindly use upstream packages; they apply known and tested patches on top of software that might misbehave or have regressions. This is why we see "-1", "-2", "-3", and so on at the end of packages carrying the same upstream version number: these indicate the package has been rebuilt with changes, often with patches that fix known issues (sketched below). Furthermore, staying with packages that don't get upgraded for long periods isn't always best, either. What if an old "stable" piece of software has major bugs that are only fixed in newer upstream versions? Can we expect the distribution developers to backport major fixes to older versions when they are busy enough backporting security fixes?
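
                Roughly, the relevant PKGBUILD fields look like this (package and patch names are made up; PKGBUILDs are plain bash):

                Code:
                # Same upstream release (pkgver), second distro build (pkgrel)
                # carrying a distro-applied fix on top of it.
                pkgname=somepkg
                pkgver=1.2.3
                pkgrel=2
                source=("https://example.org/somepkg-$pkgver.tar.gz"
                        "fix-known-regression.patch")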

                Furthermore, if things break badly, there are facilities to easily downgrade packages and lock them so they cannot be upgraded (sketched below). There are also stable LTS kernels and third-party repos that host older Xorg releases, so that crap like Catalyst can continue to work. However, in my experience things rarely break, and when they do, it's usually proprietary drivers to blame, mainly Catalyst.
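
                For example (the cached package filename here is illustrative):

                Code:
                # Reinstall an older version straight from pacman's cache...
                sudo pacman -U /var/cache/pacman/pkg/xorg-server-1.17.2-1-x86_64.pkg.tar.xz
                # ...then stop pacman upgrading it by adding to /etc/pacman.conf:
                #     IgnorePkg = xorg-server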

                Granted, it's not for everyone, and many prefer the Debian or Ubuntu LTS approach. In fact, when it comes to the desktop (or at least Linux gamers), more people prefer the Debian/Ubuntu-style approach, with Arch-based distros in a clear second place. Enterprises greatly favor the even more stagnant Red Hat or CentOS systems.

                Comment


                • #48
                  Perf regressions in Mesa, new kernels that don't start, etc.: regressions are an inherent part of GNU/Linux. I would just hope that devs fix them as soon as they appear (supposing they are even aware of these bugs).

                  Comment


                  • #49
                    Tried it in two VMs (on OpenStack) and didn't find a significant difference. http://openbenchmarking.org/result/1...BE-1510118BE07

                    Comment


                    • #50
                      Originally posted by gQuigs View Post
                      Tried it in two VMs (on OpenStack) and didn't find a significant difference. http://openbenchmarking.org/result/1...BE-1510118BE07
                      Blah, so it seems we have another of Michael's "happened for me" articles.

                      Comment
