Systemd 227 To Gain Crash Automatic Reboot Option, New Network Features
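For context, the crash-reboot option the headline refers to is a manager setting in system.conf. A minimal sketch (option names as described in the systemd v227 release notes; verify against `man systemd-system.conf` for your version):

```ini
# /etc/systemd/system.conf (sketch; assumes systemd v227 or later)
[Manager]
# If PID 1 itself crashes, reboot the machine automatically
# instead of freezing:
CrashReboot=yes
# Optionally switch to this virtual terminal on crash first:
CrashChVT=1
```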


  • #11
    Originally posted by _ck_ View Post
    Why stop there? Just make systemDos and get it over with, so we can go back to the regular Linux we wanted in the first place.
    Nobody is stopping you from using your Slackware 1.0.

    Comment


    • #12
      Originally posted by cjcox View Post

      Slight correction. Feel free to start a new project OS as you see fit. Let the primary drivers of most current Linux development do the same.
      Why on earth would Linux developers start a new OS? Are you on drugs?

      Comment


      • #13
        Originally posted by milkylainen View Post
        When what crashes? Systemd? That's stupid. Is it so bloated that it hangs the machine and drags it into a reboot? No wonder people hate systemd.
        The kernel? It already reboots if configured to. And if the kernel takes an undetected hang? Not a chance; systemd won't detect it.
        This "feature" looks like a brainfart to me.
        Only if your kernel is from the nineties. No wonder it's mostly the clueless who hate systemd.
        Originally posted by milkylainen View Post
        How about hooking systemd to a soft watchdog in the kernel, which then uses real watchdogs if available?
        A soft registry of identification IDs in the watchdog, with timeout deadlines for rebooting.
        If a process misses its deadline, the kernel reboots. If the kernel hangs, the deadlines are missed in hardware, and the hardware watchdog reboots.
        This way one can do soft real-time detection of hangs of all sorts.
        How about coming down from your tree? systemd gained watchdog support dozens of releases ago.
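For reference, the per-service watchdog works roughly like this: the unit declares a deadline with WatchdogSec=, and the daemon must ping PID 1 (via sd_notify(3) with "WATCHDOG=1") within that interval or it is treated as hung. A hedged sketch; the service name and path are hypothetical:

```ini
# mydaemon.service (hypothetical unit showing systemd's software watchdog)
[Service]
ExecStart=/usr/local/bin/mydaemon
# The daemon must send sd_notify("WATCHDOG=1") at least once per
# interval, or systemd considers it hung:
WatchdogSec=30s
# Restart the service when the watchdog fires:
Restart=on-watchdog
```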

        Comment


        • #14
          Originally posted by pal666 View Post
          ...
          Please stop posting.
          You are lowering the average intelligence quotient of the internet.

          Comment


          • #15
            @Dorsai!
            Yes, I have seen them. They're still not a sane way to do things. There's probably some sort of Dunning-Kruger effect (or a similar cognitive bias) at work here, keeping people from realising just how bad those shell scripts are, and what a horrible idea it is to use shell scripts for anything sensitive anyway.

            The fact that even Microsoft Windows is doing it in a more sane fashion should be a cause for concern, not something to embrace.

            Those who do not understand Unix are condemned to reinvent it, poorly. Using shell scripts and all sorts of insane cruft.

            @gens
            You're a prime example of cognitive bias.
            If anyone is lowering the average IQ of the web, it's you and your sickeningly conservative rantings.

            I fully understand that you're very proud of your shell scripts and that they give you the feeling of being a Real Programmer. However, that's not how things can or should work in general.
            If you're so proud of your shell scripts, you could always come up with your own OS. Or just learn the way professionals design software (separation of concerns is key here).

            Either way, you'll (un)fortunately have to embrace a real programming language.

            Comment


            • #16
              Originally posted by Dorsai! View Post

              Have you seen modern OpenRC scripts, or only Debian's legacy SysV init scripts? They are much more readable, elegant and flexible than most systemd unit files. Starting services is just not something a descriptive language does well, as even the systemd devs realize by now. That is the reason why several services need a lot of unique binary code inside systemd to function. In those situations a shell script would have been a lot more elegant.
              You're talking out of your ass.

              Gentoo systemd NFS daemon service file:
              Code:
              [Unit]
              Description=NFS server
              After=rpcbind.service
              Requires=rpcbind.service
              
              [Service]
              Type=oneshot
              ExecStart=/usr/sbin/rpc.nfsd 8
              ExecStartPost=/usr/sbin/exportfs -a
              ExecStop=/usr/sbin/rpc.nfsd 0
              ExecStopPost=/usr/sbin/exportfs -a -u
              RemainAfterExit=yes
              
              [Install]
              WantedBy=multi-user.target
              Gentoo OpenRC NFS daemon init script:
              Code:
              #!/sbin/runscript
              # Copyright 1999-2014 Gentoo Foundation
              # Distributed under the terms of the GNU General Public License v2
              # $Id$
              
              extra_started_commands="reload"
              
              # This variable is used for controlling whether or not to run exportfs -ua;
              # see stop() for more information
              restarting=no
              
              # The binary locations
              exportfs=/usr/sbin/exportfs
                mountd=/usr/sbin/rpc.mountd
                  nfsd=/usr/sbin/rpc.nfsd
              smnotify=/usr/sbin/sm-notify
              
              depend() {
                      local myneed=""
                      # XXX: no way to detect NFSv4 is desired and so need rpc.idmapd
                      myneed="${myneed} $(
                              awk '!/^[[:space:]]*#/ {
                                      # clear the path to avoid spurious matches
                                      $1 = "";
                                      if ($0 ~ /[(][^)]*sec=(krb|spkm)[^)]*[)]/) {
                                              print "rpc.svcgssd"
                                              exit 0
                                      }
                              }' /etc/exports /etc/exports.d/*.exports 2>/dev/null
                      )"
                      config /etc/exports /etc/exports.d/*.exports
                      need portmap rpc.statd ${myneed} ${NFS_NEEDED_SERVICES}
                      use ypbind net dns rpc.rquotad rpc.idmapd rpc.svcgssd
                      after quota
              }
              
              mkdir_nfsdirs() {
                      local d
                      for d in v4recovery v4root ; do
                              d="/var/lib/nfs/${d}"
                              [ ! -d "${d}" ] && mkdir -p "${d}"
                      done
              }
              
              waitfor_exportfs() {
                      local pid=$1
                      ( sleep ${EXPORTFS_TIMEOUT:-30}; kill -9 ${pid} 2>/dev/null ) &
                      wait $1
              }
              
              mount_nfsd() {
                      if [ -e /proc/modules ] ; then
                              # Make sure nfs support is loaded in the kernel #64709
                              if ! grep -qs nfsd /proc/filesystems ; then
                                      modprobe -q nfsd
                              fi
                              # Restart idmapd if needed #220747
                              if grep -qs nfsd /proc/modules ; then
                                      killall -q -HUP rpc.idmapd
                              fi
                      fi
              
                      # This is the new "kernel 2.6 way" to handle the exports file
                      if grep -qs nfsd /proc/filesystems ; then
                              if ! mountinfo -q /proc/fs/nfsd ; then
                                      ebegin "Mounting nfsd filesystem in /proc"
                                      mount -t nfsd -o nodev,noexec,nosuid nfsd /proc/fs/nfsd
                                      eend $?
                              fi
              
                              local o
                              for o in ${OPTS_NFSD} ; do
                                      echo "${o#*=}" > "/proc/fs/nfsd/${o%%=*}"
                              done
                      fi
              }
              
              start_it() {
                      ebegin "Starting NFS $1"
                      shift
                      "$@"
                      eend $?
                      ret=$((ret + $?))
              }
              start() {
                      mount_nfsd
                      mkdir_nfsdirs
              
                      # Exportfs likes to hang if networking isn't working.
                      # If that's the case, then try to kill it so the
                      # bootup process can continue.
                      if grep -qs '^[[:space:]]*/' /etc/exports /etc/exports.d/*.exports ; then
                              ebegin "Exporting NFS directories"
                              ${exportfs} -r &
                              waitfor_exportfs $!
                              eend $?
                      fi
              
                      local ret=0
                      start_it mountd ${mountd} ${OPTS_RPC_MOUNTD}
                      start_it daemon ${nfsd} ${OPTS_RPC_NFSD}
                      [ -x "${smnotify}" ] && start_it smnotify ${smnotify} ${OPTS_SMNOTIFY}
                      return ${ret}
              }
              
              stop() {
                      local ret=0
              
                      ebegin "Stopping NFS mountd"
                      start-stop-daemon --stop --exec ${mountd}
                      eend $?
                      ret=$((ret + $?))
              
                      # nfsd sets its process name to [nfsd] so don't look for $nfsd
                      ebegin "Stopping NFS daemon"
                      start-stop-daemon --stop --name nfsd --user root --signal 2
                      eend $?
                      ret=$((ret + $?))
                      # in case things don't work out ... #228127
                      rpc.nfsd 0
              
                      # When restarting the NFS server, running "exportfs -ua" probably
                      # isn't what the user wants.  Running it causes all entries listed
                      # in xtab to be removed from the kernel export tables, and the
                      # xtab file is cleared. This effectively shuts down all NFS
                      # activity, leaving all clients holding stale NFS filehandles,
                      # *even* when the NFS server has restarted.
                      #
                      # That's what you would want if you were shutting down the NFS
                      # server for good, or for a long period of time, but not when the
                      # NFS server will be running again in short order.  In this case,
                      # then "exportfs -r" will reread the xtab, and all the current
                      # clients will be able to resume NFS activity, *without* needing
                      # to umount/(re)mount the filesystem.
                      if [ "${restarting}" = no -o "${RC_CMD}" = "restart" ] ; then
                              ebegin "Unexporting NFS directories"
                              # Exportfs likes to hang if networking isn't working.
                              # If that's the case, then try to kill it so the
                              # shutdown process can continue.
                              ${exportfs} -ua &
                              waitfor_exportfs $!
                              eend $?
                      fi
              
                      return ${ret}
              }
              
              reload() {
                      # Exportfs likes to hang if networking isn't working.
                      # If that's the case, then try to kill it so the
                      # bootup process can continue.
                      ebegin "Reloading /etc/exports"
                      ${exportfs} -r 1>&2 &
                      waitfor_exportfs $!
                      eend $?
              }
              
              restart() {
                      # See long comment in stop() regarding "restarting" and exportfs -ua
                      restarting=yes
                      svc_stop
                      svc_start
              }

              Comment


              • #17
                FishB8

                Spot on!

                You just know something sucks when it has comments in it explaining things that should be easy and intuitive. Particularly when some of those comments reference previous comments.

                Comment


                • #18
                  Lots of the usual FUD. Let's go down the list...
                  Originally posted by _ck_ View Post
                  Why stop there? Just make systemDos and get it over with, so we can go back to the regular Linux we wanted in the first place.
                  What "regular Linux" means here is subjective. It's like... your opinion, man. There is nothing hard-wired into Linux that forces the systemd daemon onto you (otherwise, Linux would have something severely wrong with its design). Hell, there was even some interest sparked in an alternative (from an interesting developer associated with OpenSolaris).

                  Originally posted by MoonMoon
                  Feel free to develop Linux as you see fit. Let others do the same. It is that simple.
                  It's not quite that simple. Realistically, no single developer will be able to compete by himself with the likes of systemd, or even SysV for that matter. You shouldn't discourage discussion on the matter just because you think systemd is "the" solution; that is how progress is made. That said, some of these replies are pretty ignorant and unproductive... partially including this one.

                  Originally posted by milkylainen View Post
                  When what crashes? Systemd? That's stupid. Is it so bloated that it hangs the machine and drags it into a reboot? No wonder people hate systemd.
                  The kernel? It already reboots if configured to. And if the kernel takes an undetected hang? Not a chance; systemd won't detect it.
                  This "feature" looks like a brainfart to me.

                  Originally posted by milkylainen View Post
                  How about hooking systemd to a soft watchdog in the kernel, which then uses real watchdogs if available?
                  A soft registry of identification IDs in the watchdog, with timeout deadlines for rebooting.
                  If a process misses its deadline, the kernel reboots. If the kernel hangs, the deadlines are missed in hardware, and the hardware watchdog reboots.
                  This way one can do soft real-time detection of hangs of all sorts.
                  This boils down to him thinking he has a clever idea because he read how watchdogs work.

                  systemd supports watchdog hardware. http://0pointer.de/blog/projects/watchdog.html
                  That said, this isn't always useful, since not all machines, desktops in particular, have hardware that can act as a watchdog.
                  https://www.kernel.org/doc/Documenta...tchdog-api.txt explains what a watchdog is in simple terms.

                  Sorry. But everything that has "timeout" or "deadline" in it sounds like a bloody horrible idea for this use-case. There should be a saner way to do this.
                  He was right to some extent; servers increasingly use hardware-based watchdogs. What he didn't understand is that systemd, which he is ignorantly bashing, already supports these mechanisms.
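Concretely, the hardware-watchdog support referenced above is configured in system.conf: PID 1 opens /dev/watchdog and keeps pinging it, so even a hard kernel hang ends in a hardware reset. A sketch, assuming a watchdog driver is loaded (option names as of this era of systemd; check `man systemd-system.conf`):

```ini
# /etc/systemd/system.conf
[Manager]
# PID 1 pings /dev/watchdog at half this interval; if the whole
# machine hangs, the hardware resets it after 20 seconds:
RuntimeWatchdogSec=20s
# Give a hung shutdown this long before the hardware steps in:
ShutdownWatchdogSec=10min
```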

                  Originally posted by Dorsai!
                  Have you seen modern OpenRC scripts, or only Debian's legacy SysV init scripts? They are much more readable, elegant and flexible than most systemd unit files. Starting services is just not something a descriptive language does well, as even the systemd devs realize by now.
                  I've written a few for the sake of server management. OpenRC scripts still contain pointless boilerplate, tend to be inconsistent with other services, are a pain in the ass to port between distributions (especially if you're porting a script from Ubuntu's start-stop style, which is usually the case due to its popularity), and are just butt-ugly. The upside is the easy interface, although some debate even that. Unit files tend to have a basic human-readable description, specific instructions on what should happen when the daemon starts and stops, and various settings and modes to handle, in a uniform and clean manner, any corner case you can think of. If they don't provide all the functionality you need and you prefer shell scripts anyway, it's your birthday! You can use shell scripts with systemd as well, so you can be as descriptive with your shell language as you wish.
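To illustrate the "shell scripts with systemd" point: a unit can delegate any step to a plain script. The helper below is hypothetical; it redoes the mkdir_nfsdirs() step from the OpenRC script as a standalone POSIX sh function that a unit could run via ExecStartPre=.

```shell
#!/bin/sh
# Hypothetical helper, e.g. /usr/local/bin/nfs-setup.sh: creates the
# NFSv4 state directories, mirroring mkdir_nfsdirs() from the OpenRC
# script. Takes the state directory as an optional argument.
mkdir_nfsdirs() {
    base="${1:-/var/lib/nfs}"
    for d in v4recovery v4root; do
        mkdir -p "${base}/${d}" || return 1
    done
    echo "created NFS state dirs under ${base}"
}

# When installed as a unit helper, run against the real state dir:
# mkdir_nfsdirs /var/lib/nfs
```

A unit would then carry a line like ExecStartPre=/usr/local/bin/nfs-setup.sh (path hypothetical), keeping the shell logic while systemd handles the supervision.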

                  That is the reason why several services need a lot of unique binary code inside systemd to function. In those situations a shell script would have been a lot more elegant.
                  I split the quote above because this part in particular is FUD. You need to explain what functionality or "unique binary code", whatever that means, is being included in systemd for the sole purpose of a single service. I can already state that there is no such "binary code" or functionality; it goes against the general design of systemd, and there would be no point to it. Ironically, despite the claim that systemd devs always blame others and never fix their code, it's now being claimed that they create workarounds or unique service code specifically for others. Do you see the self-contradiction here?
                  Last edited by computerquip; 02 October 2015, 11:23 PM.

                  Comment


                  • #19
                    Originally posted by gens View Post
                    i am imbecile
                    nice.

                    Comment
