Fedora, Red Hat Working On "Project Lumberjack"


  • Fedora, Red Hat Working On "Project Lumberjack"

    Phoronix: Fedora, Red Hat Working On "Project Lumberjack"

    There's new Fedora-hosted work going on: Project Lumberjack. This initiative is about improving system logging on Linux...


  • #2
    Looks like bloat:

    - More complex applications. Most probably a new shared library to help them.
    - People will need to install and maintain new purpose-specific tools and learn how to use them, while the time-honoured generic ones are left aside.
    - Even more complex userspace daemons (and probably their legacy compatibility mechanisms) enter the fray.

    I'm sure a lot of people need structured logging on their servers, but on my PC I'll pass.

    Comment


    • #3
      Yeah, there is never any reason to know what is going on with your computer. Especially if it's a PC.

      Also, hundreds of different programs doing the same thing, but differently, each with its own unique libraries and its own unique code, is vastly more lightweight and easier to deal with than if they just used a shared library that did everything correctly from the outset.

      Comment


      • #4
        I agree with drag 100%!

        No, seriously, shared libraries are good if they're written and designed well.

        (edit: in case anyone missed it, I do actually agree with drag...fsck'ing double sarcasm )
        Last edited by Nobu; 01 March 2012, 11:58 PM.

        Comment


        • #5
          The thing is, it's hard to make use of logs when you have 100,000 machines (or even 10, in fact), because it's just a bunch of unorganized text.

          That doesn't mean CEE is perfect; only trying to use it will tell. I personally dislike the bloat as well, but it does make sense.

          I just hope they don't overengineer it.
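          The contrast this comment draws can be sketched with two hypothetical log lines for the same event (the hostnames, messages, and field names below are invented for illustration):

```python
import json

# The same event, once as free text and once as a structured record.
unstructured = "Mar  1 23:58:01 web01 app: login failed for user alice from 10.0.0.5"
structured = '{"host": "web01", "event": "login_failed", "user": "alice", "src_ip": "10.0.0.5"}'

# With free text, extracting the user means ad-hoc string surgery that
# silently breaks the moment a developer rewords the message.
user_from_text = unstructured.split("for user ")[1].split(" from")[0]

# With a structured record, every consumer reads the same named field,
# regardless of how the human-readable message is phrased.
record = json.loads(structured)
user_from_record = record["user"]

print(user_from_text, user_from_record)  # both print "alice"
```

          Across one machine the difference is cosmetic; across thousands, the ad-hoc parsing in the first approach is exactly the pain being described.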

          Comment


          • #6
            nice

            sounds nice.

            i'm quite sick of desktop distributions dumping logs in inconsistent, useless piles. either do it right or don't do it at all, and stop screwing my storage devices for nothing, damn it!
            and no, even as a Gentoo, Arch and Sabayon user, i don't enjoy tinkering with stuff that should be well-prepared out-of-the-box, fuck that shit.

            Comment


            • #7
              Originally posted by drag View Post
              Yeah, there is never any reason to know what is going on with your computer. Especially if it's a PC.
              Textbook example of a straw man. You can perfectly well do it today, with unstructured logging. That's what has been used on UNIX servers since the '70s, and apparently people managed to know what was going on in their computers.
              For a more modern example: Android added a new logging subsystem to the Linux kernel to handle its sophisticated logging needs. Clean room, no legacy baggage. Guess what they chose? Unstructured logging.

              Originally posted by drag View Post
              Also a hundreds different programs doing the same things, but differently with their own unique libraries and their own unique code, is vastly more lightweight and easier to deal with then if they just used a shared library that did everything correctly from the outset.
              All those hundreds of different programs are already doing their thing through a single interface called syslog. Or write(2), if you will. The thing that is meant to replace that old, working interface is going to be a system library, which all programs will have to link to, implementing a complex XML-based multi-standard (http://cee.mitre.org/docs/overview.html, http://cee.mitre.org/docs/profiles.html, http://cee.mitre.org/docs/cls.html, http://cee.mitre.org/docs/clt.html). Inevitably this library is going to evolve, and we'll end up with multiple binary-incompatible versions of it. That's fine for normal libraries but less so for system ones. It's yet another barrier to entry limiting the flexibility of Linux's userspace, which is why I think it should be a server-only thing.

              And do you like XML? I find it awful because it manages to be hard to parse and edit for both humans and computers. It's slow, verbose, fragile and unreadable, and it can't be processed with standard UNIX tools such as grep and sed. The specialized tools that handle it are powerful, but computationally expensive and hard to use. Examples? See how convenient it is to change the fontconfig configuration file (http://linux.die.net/man/5/fonts-conf), or in general to handle any of the things that appeared during the XML fad of the early 2000s. And the XSLT language: in all honesty, even if you're a master of it, would you say you can solve problems faster by writing an XSLT program than a Perl or Python one? XML is excellent as a lingua franca for data, and as such it's a necessary evil, but it's a pain to fight with in everyday work.
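              The grep-vs-XML point can be made concrete with a toy event (the element names below are invented, not taken from the actual CEE schema):

```python
import xml.etree.ElementTree as ET

# A hypothetical XML-encoded event vs the same event as one flat syslog-style line.
xml_event = "<event><action>login</action><status>failure</status><user>alice</user></event>"
syslog_line = "app: login failure user=alice"

# The flat line yields to a trivial substring test, which is all grep does.
found_in_line = "user=alice" in syslog_line

# The XML form needs a full parser before you can ask it anything, and a
# single stray '<' or unclosed tag makes the whole record unreadable.
root = ET.fromstring(xml_event)
user = root.findtext("user")

print(found_in_line, user)  # True alice
```

              Whether that parsing cost is worth the gain in machine-readability is exactly what the two sides of this thread disagree about.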

              Comment


              • #8
                bad logging can be a major problem for sysadmins

                I have worked as a sysadmin for quite a few years, and logging has been a major pain for me: developers all hack up their own logging code, usually as a last-minute job, with random formatting and zero consideration for consistency within their own work, let alone coordination with others.

                at one job, at a large e-commerce company, we had a hell of a time working out problems because the developers logged things in random ways in different modules. If a bug or bad data propagated through the code, you had no idea which error/debug output was relevant to that data and which belonged to some other web client's execution thread. Half the time, figuring out what went wrong was educated guesswork and hoping that the web server was relatively idle at the time!


                so I can only say: PLEASE consider how other people will read the logs you write. Look at how syslog does it at the very least, and ensure that your log formats are consistent and parsable by software, e.g. $DATE:$TIME:$PID:$TID:[modulename][class.function]:$SEVERITY:Text

                (PID = process ID, TID = thread ID).
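                A minimal sketch of that suggested layout (the module and function names below are made up; Python's os.getpid and threading.get_ident stand in for $PID and $TID):

```python
import datetime
import os
import threading

def log_line(module, func, severity, text):
    """Format one record in the consistent, machine-parsable layout
    suggested above: $DATE:$TIME:$PID:$TID:[module][class.function]:$SEVERITY:Text."""
    now = datetime.datetime.now()
    return "%s:%s:%d:%d:[%s][%s]:%s:%s" % (
        now.strftime("%Y-%m-%d"),
        now.strftime("%H:%M:%S"),
        os.getpid(),
        threading.get_ident(),
        module,
        func,
        severity,
        text,
    )

line = log_line("checkout", "Cart.total", "ERROR", "price mismatch for SKU 123")
print(line)
```

                Because every field sits at a fixed position with a known delimiter, downstream tools can filter by module, severity, or thread without guessing at each developer's message style.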

                Comment


                • #9
                  Originally posted by peppepz View Post
                  Textbook example of a straw man. You can perfectly well do it today, with unstructured logging. That's what has been used on UNIX servers since the '70s, and apparently people managed to know what was going on in their computers.
                  By that logic, everything about computers was said and done by 1980 and nothing should ever change again. Sorry, but I'm not buying that.

                  Originally posted by peppepz View Post
                  All those hundreds of different programs are already doing their thing through a single interface called syslog. Or write(2), if you will. The thing that is meant to replace that old, working interface is going to be a system library, which all programs will have to link to, implementing a complex XML-based multi-standard (http://cee.mitre.org/docs/overview.html, http://cee.mitre.org/docs/profiles.html, http://cee.mitre.org/docs/cls.html, http://cee.mitre.org/docs/clt.html). Inevitably this library is going to evolve, and we'll end up with multiple binary-incompatible versions of it. That's fine for normal libraries but less so for system ones. It's yet another barrier to entry limiting the flexibility of Linux's userspace, which is why I think it should be a server-only thing.
                  This is just pure FUD. There's no reason to believe that such a library would change its ABI any more often than, say, glibc does.

                  Comment
