Users/Developers Threatening Fork Of Debian GNU/Linux


  • #91
    Originally posted by danwood76 View Post
    I have also incorrectly compiled kernels before that failed to boot due to missing features, even when using SysVinit no less. All I did was recompile the kernel with the correct features selected and move on with my life.
    Disabling SWAP support in the kernel is not incorrect in any way. Disabling IPC or RPC would be, but I'm not that stupid. Watch your words. In fact, most sane people disable SWAP because SWAP is rarely, if ever, helpful, and more often than not harmful.

    In fact, SWAP must be disabled in many situations because bug 12309 (closed != fixed; I still observe this bug on many Android phones) is still unresolved, and SWAP makes it 1000 times worse.
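    For the record, you can check whether a kernel was even built with swap support, and turn swap off without a rebuild. Roughly (assuming your distro ships its kernel config under /boot; paths differ):

        # was swap support compiled into the running kernel?
        grep CONFIG_SWAP /boot/config-$(uname -r)
        # disable swap at runtime, no recompile or reboot needed
        sudo swapoff -a
        # confirm nothing is listed any more
        cat /proc/swaps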
    Last edited by birdie; 21 October 2014, 09:08 AM.



    • #92
      I'd like to hear ONE (1) objective, technical reason why binary logging is bad. Let me pre-empt your non-sequitur answers first:

      1. It's corrupting data. That is not because it's binary; that's because of a bug. The devs' stance on NOTABUG is idiotic, I agree, but that's not an objective, technical reason why binary logging is bad.
      2. You need extra tools to view it. Really? You need extra tools to view a text log too. Grep. Cat. Less. More. Whatever you're using.
      3. But those tools are part of the base. So is journalctl on systemd/Linux systems.
      4. It's against UNIX principles. That's not an objective, technical reason; it's a Luddite philosophical one.

      So, what am I missing?

      Why is it good? Because it represents a single, unified API for logging and log querying. No more guessing at patterns to extract by date, date range, severity, facility, process, user, SELinux context, etc. Many of those filters aren't even available when grepping...
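      To make that concrete, a few journalctl invocations of the kind I mean (the unit name is only an example):

          journalctl --since "2014-10-20" --until "2014-10-21"   # by date range
          journalctl -p err -b                                   # only errors from the current boot
          journalctl -u nginx.service --since today              # one service's messages
          journalctl _UID=1000                                   # messages from one user's processes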



      • #93
        Originally posted by pal666 View Post
      for servers it is even better. hint: gzipped logs are binary and journald can export to syslog. how many init choices do you have in FreeBSD? btw, systemd is not analogous to sysvinit, it is analogous to the FreeBSD base system: GET SOME BRAIN!!!1111
        You are such a pleasant person to converse with; your manners and attitude are exemplary. /s

        I appreciate you pointing out that journald can export to syslog; that is interesting. It is a pity that you are so aggressive and abrasive - your voice gets lost behind the shouting and insults. Take a chill pill, dude; how can you get so worked up about something like this?
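        For anyone else curious, that forwarding appears to be a one-line setting (a sketch; some distros enable it by default, so check yours first):

            # /etc/systemd/journald.conf
            [Journal]
            ForwardToSyslog=yes
            # then restart the journal daemon:
            # systemctl restart systemd-journald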



        • #94
          Originally posted by birdie View Post
          Disabling SWAP support in the kernel is not incorrect in any way. Disabling IPC or RPC would be, but I'm not that stupid. Watch your words. In fact, most sane people disable SWAP because SWAP is rarely, if ever, helpful, and more often than not harmful.

          In fact, SWAP must be disabled in many situations because bug 12309 (closed != fixed; I still observe this bug on many Android phones) is still unresolved, and SWAP makes it 1000 times worse.
          You realise that removing the swap line from your /etc/fstab has the effect of disabling swap, right?

          Swap is in fact very useful and usually speeds up a lot of operations, especially on systems with RAM constraints.

          You strike me as the kind of person who wants to find something wrong with a new project just so you can argue with people.
          If you don't like systemd, that's fine, but please stop spreading FUD.
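          To illustrate (the UUID below is just a placeholder; yours will differ):

              # /etc/fstab - comment out the swap line so it is not activated at boot
              # UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  swap  sw  0  0
              sudo swapoff -a   # and disable it immediately, no reboot or kernel rebuild needed
              swapon -s         # verify: the list should now be empty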



          • #95
            Originally posted by Tentacle View Post
            I'd like to hear ONE (1) objective, technical reason why binary logging is bad. Let me pre-empt your non-sequitur answers first:

            1. It's corrupting data. That is not because it's binary; that's because of a bug. The devs' stance on NOTABUG is idiotic, I agree, but that's not an objective, technical reason why binary logging is bad.
            2. You need extra tools to view it. Really? You need extra tools to view a text log too. Grep. Cat. Less. More. Whatever you're using.
            3. But those tools are part of the base. So is journalctl on systemd/Linux systems.
            4. It's against UNIX principles. That's not an objective, technical reason; it's a Luddite philosophical one.

            So, what am I missing?

            Why is it good? Because it represents a single, unified API for logging and log querying. No more guessing at patterns to extract by date, date range, severity, facility, process, user, SELinux context, etc. Many of those filters aren't even available when grepping...
            To expand a bit on your nice post:

            Basically, journald exists for one reason and one reason only: it has to start logging well before userspace is loaded, and to do that it needs to avoid depending on userspace as much as possible (this is why syslog/rsyslog/syslog-ng don't solve that problem; they are far too big and too dependent on userspace to start that early). Why binary? Pretty much because it is far easier to code and very fast (but it requires much more care around committing transactions).
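            You can actually see that early capture for yourself, something like this (output obviously differs per machine):

                # messages from the current boot, with monotonic timestamps starting near zero
                journalctl -b -o short-monotonic | head -n 20
                # kernel ring buffer messages, collected by the same journal
                journalctl -b -k | head -n 20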



            • #96
              Originally posted by erendorn View Post
              But everything is described in the link.
              Targets, units and timers have dependencies, which lets you build dependency trees.
              When booting the system, graphical.target is selected, so systemd launches the leaves of its tree in parallel, and then goes up the branches until all conditions for the target are met.
              When socket/dbus/timer-activating a unit, systemd starts the leaves of the unit tree, and then goes up the branches.

              Or is that not how it works?
              yes
              simple answer
              i asked him because he was talking shit more than giving any other kind of information

              no, it was not in the links
              at least not the "how"

              upstart, in contrast, resolves the dependency tree and then just starts the leaves in parallel
              so where systemd "starts" a "service" and then starts the things it depends upon
              upstart resolves everything then starts everything that has dependencies met at once

              i'd say upstart does it a better way
              although the difference is negligible in the end (unless a dependency fails)
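              for reference, systemd will at least show you the tree it resolves, so you can compare the two models yourself (the unit names are just the stock examples):

                  systemctl list-dependencies graphical.target      # the tree walked at boot
                  systemctl list-dependencies --after sshd.service  # what a unit is ordered after
                  systemctl show -p Wants,Requires,After multi-user.target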


              anyway
              people talking about "kernel features being used for the good of linux" should read about proc events (google proc connector), a way of getting the information systemd uses cgroups for (tracking processes) without locking cgroups to an init

              there is a plethora of similar situations related to non-boot things that the so-called init does (but that don't have to have anything to do with it)
              i always said linux just needs a gui for all those things, even way before all this BS came to be



              • #97
                Originally posted by Tentacle View Post
                2. You need extra tools to view it. Really? You need extra tools to view a text log too. Grep. Cat. Less. More. Whatever you're using.
                3. But those tools are part of the base. So is journalctl on systemd/Linux systems.
                Why is it good? Because it represents a single, unified API for logging and log querying. No more guessing at patterns to extract by date, date range, severity, facility, process, user, SELinux context, etc. Many of those filters aren't even available when grepping...
                The way I look at it, I typically need to look at logs when things are going wrong. Yes, when I am having issues with postfix or NGINX I don't really mind which tools I use; grep or any variant of it is good for quick work, and I am sure that whatever the systemd people built is very capable and suitable. For more serious stuff I will be checking it out with my elasticsearch/logstash/kibana stack. But when the chips are down, I want simple tools that are known to work and have been working for many, many years - crazy, I know, but I also don't use btrfs in production, which probably makes me a Luddite. The point being that, yes, you need tools to view a text file, but there are a lot more tools to play with text files than there are tools to play with whatever binary format systemd is peddling. Corruption - whatever. I guess if a binary file is corrupted you are in a lot more trouble; most text tools can cope (sometimes badly, but still) with some corruption.

                I can personally see very little benefit in anything over cat and grep in these situations. Typically it would mean clients yelling at us over the phone, and the last thing I need at that point is some untested toolstack getting flaky and bugging out on me. Give me simple, tried, and tested at those times. Systemd might be great for desktops: I don't care, I gave up using Linux desktops a long time ago. I actually want to get work done with my system instead of working on my system.

                In any case, this thread is mostly two camps shouting about who is right and who is wrong, which is frankly juvenile. It will be a long, cold day in hell before I take anything seriously from a guy yelling "get some brains" at people who want to have a civil discussion. What I suspect the guys thinking of forking Debian are after is choice. What works for you and makes you all happy might not work for me. Why is everybody in here yelling and shouting that this is wrong? Linux has *always* been about choice. Removing this choice to force people onto something they are clearly unhappy with is annoying. If my choices are taken away from me, then as a sysadmin and business owner I will simply have to re-evaluate my options and see which tools best meet my needs. If the big distros remove themselves from my options list because of the changes they make to core systems, well, I'll just have to go and spend my money somewhere else. No need for all these dramatics...



                • #98
                  Originally posted by bearded_linux_admin View Post
                  Corruption - whatever. I guess if a binary file is corrupted you are in a lot more trouble; most text tools can cope (sometimes badly, but still) with some corruption.
                  just pass it through strings to get rid of any non-printable characters
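                  something like this (the path is the usual persistent journal location, yours may differ):

                      # salvage the readable text out of a damaged journal file
                      strings /var/log/journal/*/system.journal | grep -i error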



                  • #99
                    Originally posted by Tentacle View Post
                    I'd like to hear ONE (1) objective, technical reason why binary logging is bad. Let me pre-empt your non-sequitur answers first:

                    1. It's corrupting data. That is not because it's binary; that's because of a bug. The devs' stance on NOTABUG is idiotic, I agree, but that's not an objective, technical reason why binary logging is bad.
                    2. You need extra tools to view it. Really? You need extra tools to view a text log too. Grep. Cat. Less. More. Whatever you're using.
                    3. But those tools are part of the base. So is journalctl on systemd/Linux systems.
                    4. It's against UNIX principles. That's not an objective, technical reason; it's a Luddite philosophical one.

                    So, what am I missing?

                    Why is it good? Because it represents a single, unified API for logging and log querying. No more guessing at patterns to extract by date, date range, severity, facility, process, user, SELinux context, etc. Many of those filters aren't even available when grepping...
                    This one I had to reply to.

                    1. Depending on how the log is serialised, with a binary log you can lose one message, part of a message, or the whole log (damaged headers or metadata). If the log is plain text, you lose one character.
                    2. Every, and I mean EVERY, operating system out there has a text editor. In a fix you can move a file and read it anywhere. Guess what happens if you try that with a binary log.
                    3. See point 2. If you narrow your view only to Linux, then OK, but even there you have to meet specific requirements to read the binary logs.
                    4. Well, the UNIX principles are proven by decades of a rock-solid operating system. Have a look at how Windows did, and how it does now that it has started adopting them.

                    In a nutshell, you are missing a lot. You have a very narrow field of view.

                    From my own experience:

                    I have worked (and in some cases still work) with Linux, Tru64, AIX, Windows, HP-UX and some other obscure systems. While binary logs are nice when they work (Windows, Tru64), they are a pain to read and learn. For text logs you need to know the same tools everywhere (grep, cat, awk, sed ...) and you learn the log format as you go (this even applies to SAP and Oracle). In the case of binary logs, you have to learn a different formatting/viewing tool for each platform as well as the log format.

                    I leave it up to you to consider which situation is easier on the sysadmin.
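                    On the corruption point specifically, the journal does at least ship a consistency checker, for what it is worth (it detects damage but does not repair it):

                        journalctl --verify                                              # check all journal files
                        journalctl --file /var/log/journal/*/system.journal --verify    # or one specific file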



                    • Originally posted by gens View Post
                      just pass it through strings to get rid of any non-printable characters
                      yep, I know. I just don't want the hassle, and I see very little benefit to me. Like I said, it might be great for people with desktops/laptops, whatever - I don't see any tangible benefits for my server farms, and I do see a lot of risk, as well as a lot of changes needed to accommodate such a major change. In short, what's in it for me?

