New Group Calls For Boycotting Systemd

This topic is closed.

  • Originally posted by NotMine999 View Post
    systemd offers features that most desktop-oriented distributions would like to see.
    Originally posted by NotMine999 View Post
    What I hear a lot from experienced sysadmins is this: "systemd doesn't bring many benefits to me while making more work for me because I have to convert some custom app startup script to operate in the systemd format"
    Originally posted by NotMine999 View Post
    So systemd for desktop-focused deployments? Why not? For servers and busy sysadmins? Probably not.
    Hmmm... so you think sysadmins do not want to reliably kill services (incl. subprocesses that those have spun off)? What about managing resource usage of services (memory, cpu time)? Do sysadmins not need a reliable way to check the state of a service? Sysadmins do not want services to be monitored and restarted if they crash (or to signal them)? Neither do they care for well-working watchdog support for their server hardware?

    Systemd is literally packed with features that make a lot of sense for servers.
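
    Those capabilities map directly onto unit-file directives. A minimal sketch, assuming a systemd-era distro; the unit name, description and binary path are hypothetical:

    ```ini
    # Hypothetical unit: /etc/systemd/system/myapp.service
    [Unit]
    Description=Example daemon managed by systemd

    [Service]
    # Hypothetical binary path
    ExecStart=/usr/local/bin/myapp
    # Restart automatically if the service crashes
    Restart=on-failure
    # cgroup-based resource limits (memory, CPU)
    MemoryLimit=512M
    CPUQuota=50%
    # The service must ping the watchdog (via sd_notify) at least every 30 seconds
    WatchdogSec=30
    # Stopping or killing the unit takes every spawned subprocess with it
    KillMode=control-group
    ```

    With such a unit, `systemctl status myapp` shows the state and the cgroup's full process tree, and `systemctl kill myapp` signals every process in it.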



    • Originally posted by Paul Frederick View Post
      What exactly is so awful about the present init system that it needs to be replaced? Is your system not booting up now? My i3 with a mechanical hard drive boots up in 5 seconds running a regular init.
      What exactly is so awful about the present display server (X server) that it needs to be replaced? Is your system not showing you a picture now? My i3wm with my mechanical keyboard starts up in less than 5 seconds running a regular tiling WM.

      And to be frank, with a 2D tiling WM that is better than anything else anyway, the X server is no problem, so why would anybody want to upgrade to Wayland?



      • Originally posted by Karl Napf View Post
        Hmmm... so you think sysadmins do not want to reliably kill services (incl. subprocesses that those have spun off)? What about managing resource usage of services (memory, cpu time)? Do sysadmins not need a reliable way to check the state of a service? Sysadmins do not want services to be monitored and restarted if they crash (or to signal them)? Neither do they care for well-working watchdog support for their server hardware?

        Systemd is literally packed with features that make a lot of sense for servers.
        You don't need systemd to be able to perform any of those tasks.

        And your post is another example of typical systemd fanboy way of thinking: "before systemd there was nothing and UNIX sysadmin wasn't able to do their job properly". I'd say it's laughable, but it's actually pretty sad that we even have this discussion.



        • Originally posted by nslay View Post
          I should also mention that single user mode allows you to use the system prior to mounting any file system (with / mounted read-only), with no login and theoretically no restriction on what you can run as a user (just a single console though). It's also very handy when you forget root's password and have physical access to the machine.
          Actually, that is what an initramfs does nowadays: it provides a minimal environment to fix issues when mounting the root fs fails.



          • Originally posted by prodigy_ View Post
            You don't need systemd to be able to perform any of those tasks.

            And your post is another example of typical systemd fanboy way of thinking: "before systemd there was nothing and UNIX sysadmin wasn't able to do their job properly". I'd say it's laughable, but it's actually pretty sad that we even have this discussion.
            Yep, you do not need systemd for those tasks, but systemd makes all of them a lot easier than they used to be. Give systemd a try and you will be cured of the misconception that everything is rosy in sysv-init land :-)



            • Originally posted by sdack View Post
              If we really were to turn everything into binary form and also moved away from source distributions then we are getting nearer to closed source.
              If that were true, we should start porting everything except maybe the kernel to Python or Lisp and interpreting it, instead of using C libraries.

              Having binary log files has nothing to do with binary SOURCE. And by the way, when did you last open /var/log/messages in an editor to search for something in it? You didn't; you use that file only with commands like cat ... | grep ..., so where is the difference between cat+grep and journalctl+grep (if you even still want grep)?
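
              A concrete illustration of the point: the search workflow is the same one-liner either way, only the producer differs. A minimal sketch (the unit name "myapp" is hypothetical; the journalctl lines assume a systemd machine and are shown as comments only):

              ```shell
              # Classic text log: grep is the whole interface
              printf 'started\nerror: disk full\nstopped\n' > /tmp/demo.log
              grep -c 'error' /tmp/demo.log    # prints 1
              # journald equivalent (hypothetical unit "myapp"):
              #   journalctl -u myapp | grep 'error'
              #   journalctl -u myapp -p err   # or filter by priority, no grep needed
              ```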

              That argument sounds good, but it is not. Even if it were true, it would not be a good one, because it amounts to stopping any improvement of Linux just so it can be studied at university. Linux is not primarily an academic OS; it is a candidate to become the major operating system. That may sound crazy while we sit at the maybe 2-3% you find in some statistics for the major OS, but Android has proven that it is easy to go even from 0% to 80-90% in such markets in a few years if the right pieces come together.

              And yes, Linux is the best candidate to reach that. Many things happening right now go in this direction: Steam Machines plus SteamOS are one, Btrfs is another (Microsoft has not even planned an answer for the distant future and will not deliver one in the next years, so they are light-years behind in filesystems; I even question whether they still plan to compete, or have tactically calculated on losing the OS war in the medium term and stopped investing money in it, just extracting as much money as possible with new versions that have nearly no new features at all). This systemd package-management and virtualisation work is part of it too, and with Wayland we are reaching the level of the other OSes, maybe going slightly ahead, after being 20 years behind.

              In 2 to 5 years the only argument left against Linux will be that Microsoft Office does not run natively under Linux. Gaming, the other big argument against Linux, is already falling away, and Photoshop? Maybe 10-20% use it illegally under Windows anyway; that will not stop the masses, and once Linux has 10-20% market share Adobe will port their stuff faster than light.

              Maybe I am wearing pink goggles or something like that, but at the moment I am 99% sure that GNU/systemd/Linux will be the most used desktop-computer OS in 5 years. And I would never have said that at any point in the last 10 years.

              INIT-SCRIPTS:

              I don't get that argument. systemd makes this better: even today you are more likely to get a systemd boot script than one for any particular distro. Why? Because its config files are distro-independent. Take mpd, take deluge-daemon.

              On Ubuntu, for years you could only copy-paste an init script for deluge from a forum site and do everything manually; on Fedora a yum install and it's there, and it's the same on Arch Linux. And the same unit will work on Debian and Ubuntu from day one once they have switched.

              And is it not good for sysadmins not to be vendor-locked-in? I think not becoming too dependent on one vendor is one of the most important points for a sysadmin.
              Last edited by blackiwid; 04 September 2014, 07:51 AM.



              • Originally posted by nslay View Post
                And tell me, what would you do with your desktop environment? Open a terminal and run the same commands you would in single user mode? Spare me. There's nothing obsolete about single user mode, just ignorance about its existence and use.
                Perhaps... although I think most distros have enough in their initrd these days to recover from a trashed root partition, which suggests that the /usr split is not as important any more as it was back in the olden days.

                And AFAIU systemd replaces numeric runlevels with targets (i.e. graphical, multi-user, single-user, etc.). Kinda the same concept, but it should be more flexible.
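
                For reference, systemd ships alias targets for the old numeric runlevels; a rough sketch of the mapping:

                ```
                SysV runlevel   systemd target
                0               poweroff.target
                1               rescue.target      (single-user)
                3               multi-user.target
                5               graphical.target
                6               reboot.target
                # switching at runtime: systemctl isolate multi-user.target  (roughly "telinit 3")
                ```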



                • Originally posted by blackiwid View Post
                  Having binary log files has nothing to do with binary SOURCE. ...
                  No. You are twisting things around because you do not understand why we have them, and because twisting them does not give you an answer, your conclusion is to throw it all away. That is not a sane approach to solving problems.

                  We have always relied on human readability as well as backwards compatibility, and for good reasons. Are cat and grep programs? Yes. Is text just a sequence of bits and bytes? Yes. That is no reason to throw it away and reinvent it differently, because then we could do the same with everything else: past, present and future software, including any new implementations. Nothing will stop us from doing it over and over again, and it simply leads nowhere when it only leads in circles. This is why we preserve as much of the old as possible, why we only fix what is broken, and why we build on top of it. Only once you understand this can you understand how progress is made.

                  systemd works, and it does its job well, but it is also a step backwards. Once any of its dependencies becomes outdated and needs replacing, and it will happen, you will see how much of a problem it actually is. Perhaps you now think we should not change it in the future, only fix what is broken and build on top of it, but you will not do it, because what you do not learn now you also cannot use in the future.

                  To spin this as far as thinking one could win against Microsoft shows how deep you have fallen into the illusion, because Microsoft software has never been great to begin with. However, they do know how to influence politics, users, sales, companies, etc., which made them one of the biggest de facto monopolists the economy has ever seen. To believe that if only one could write perfect software (which does not exist, by the way) one could also beat Microsoft is naive. We will more likely see the desktop PC die before this happens, because while Linus still dreams of conquering the desktop, others have been working around it for a long time now. Just look at where Linux can be found these days. Most of us do not really want dominance; we only want Linux and do not care for economic or political power.
                  Last edited by sdack; 04 September 2014, 10:16 AM.



                  • Originally posted by sdack View Post
                    Your approach requires a second partition or a second drive just the same, so you are doing it the way it has been done for decades under UNIX/Linux. Now imagine you had to administrate hundreds of machines within a team. Eventually you will want a solution on every machine, not a search for the right flash drive with the right software first. It is then easier to use a small partition or a small disk inside the machine for this purpose. It allows you to do administrative tasks remotely, possibly automate them, and avoid having to walk through every floor of your company, going from room to room to do your job locally at every machine each single time.

                    UNIX contains a lot of experience, but it is understandable that not all of it is needed by someone who just wants to use it alone at home on a single machine. In fact you do not even need a multi-user OS in a lot of cases. Still, I do not think one should abandon these concepts altogether only so that some people can turn their computers into a gaming console. I think it is better to have an OS that allows for a very wide range of use cases, so that all the people using it can profit from one another's experience. Throwing away such concepts for the sake of gaining a couple of seconds of boot time may turn out to be unwise, especially when it could have been implemented side by side, but this is also the reason why young people are doing it: to learn about the consequences. The young rebelling against the old is a concept as old as the stars and allows everyone to come out the wiser.
                    It has been considered best practice in the IT industry this century never to try to repair a broken OS from within the OS itself if you don't know exactly why it failed.

                    There are several reasons for this: if the problem was caused by a runaway process that eats data, it may continue to do so if booted. It could be a driver or filesystem bug that corrupted data and may continue to do so if booted. It could be a hacker intrusion where the intruder tries to cover his tracks by deleting files, or it could be defective hardware, like memory or a failing hard disc; the less you stress and use a dying disc the better.

                    So for forensic and data-integrity reasons, best practice is to boot from a rescue disc and mount the broken OS media read-only.

                    For Linux, I think any modern Fedora distro should be able to boot any Linux installation, so there is no need for special rescue media for every computer. There are also dedicated Linux rescue discs with special forensic and data-rescue tools.

                    The concept of trying to repair a broken OS from within the OS itself, with only the crude tools available in /sbin etc., is an obsolete concept from the 1990s.

                    There are many things we did in the old days that are simply wrong and obsolete now, because the way we do computing has changed so much.
                    Gone are the days when a server was an essentially hand-crafted, highly individual machine glued together with hand-made scripts. These days it is all about rapid deployment with automated tools; if a workstation OS misbehaves, you just nuke it from orbit and deploy another image or similar.

                    OS file layouts, partition schemas, toolboxes etc. aren't holy dogmas that must be obeyed forever. When they were made, they were made with their contemporary use in mind, so when they no longer fit contemporary use, they should be changed.

                    Any OS that still sticks to obsolete dogmas will just disappear. This is exactly why the industry wants systemd: it fits perfectly into the present-day computing scheme of zero-conf, automatic discovery of everything, rapid deployment, and high service density.

                    This is progress. I, for one, don't yearn for the days of manually configuring XFree86 config files and calculating CRT modelines.



                    • Originally posted by interested View Post
                      It has been considered best practice in the IT industry this century never to try to repair a broken OS from within the OS itself if you don't know exactly why it failed.
                      No. You do want to be able to repair a machine remotely and without taking the entire OS down. I suggest you first work in system administration at a larger company and get some hands-on experience of how it is done.

                      By the way, not knowing why it fails is simply bad. If you do not know, how can you stop it from happening again? You cannot just leave it to luck.

                      I know your way of thinking is fairly common among Windows users, but if you look at Microsoft you will see that even they want an OS that can repair itself and does not require external tools every time it fails.

                      What you are describing is not best practice. It is only the worst-case scenario, and you surely do not want to work in an environment where every incident is treated as the worst case and your solution is to reach for the hammer each time.

