Lennart Poettering On The Open-Source Community: A Sick Place To Be In


  • Originally posted by gens View Post
    so..
    cgroups, as in just the process grouping part of them, are light
    as soon as you add cpu/io/net limiting, they are no longer that light
    In the gaming scenario, you are limiting the background activity by imposing CPU/IO limits, not what is happening in the Wayland window that has focus.
    "nice" and "CFS" are rather irrelevant in such a scenario, since you want hard CPU/memory/IO limits for everything in the background, not just a lower priority.

    Anyway, that is future development. I am sure that Lennart Poettering will get attacked at that time too, for daring to improve Linux and systemd.
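
    For illustration only (nothing anyone in this thread posted): a minimal sketch of what such a hard cap looks like at the cgroup v2 level. The group name "background" and the paths are assumptions, it needs root and the cpu controller enabled, and on a systemd machine you would normally ask systemd (a slice or scope) to do this instead of writing to the hierarchy yourself.
    Code:
    /* sketch: cap a background process at roughly 20% of one CPU via cgroup v2 */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    static void write_file(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); exit(1); }
        fputs(value, f);
        fclose(f);
    }

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }

        /* assumes cgroup v2 mounted at /sys/fs/cgroup and the cpu controller enabled */
        mkdir("/sys/fs/cgroup/background", 0755);

        /* 20000us of CPU per 100000us period = a hard ~20% cap for the whole group */
        write_file("/sys/fs/cgroup/background/cpu.max", "20000 100000");

        /* move the target process into the group */
        write_file("/sys/fs/cgroup/background/cgroup.procs", argv[1]);
        return 0;
    }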

    Comment


    • Originally posted by interested View Post
      In the gaming scenario, you are limiting the background activity by imposing CPU/IO limits, not what is happening in the Wayland window that has focus.
      "nice" and "CFS" are rather irrelevant in such a scenario, since you want hard CPU/memory/IO limits for everything in the background, not just a lower priority.

      Anyway, that is future development. I am sure that Lennart Poettering will get attacked at that time too, for daring to improve Linux and systemd.
      no i do not want to
      in a proper X desktop environment all user applications are started with a nice value higher than X and the WM, as those are important for responsiveness
      in the theoretical wayland scenario the WM can set nice itself (it would be the only userspace process that can even know those kinds of things about windows)
      the scenario that you described has nothing to do with systemd

      nice and CFS are irrelevant ?
      i spent half an hour of my life sharing knowledge about how CFS does process scheduling properly and how it compares to limiting using cgroups
      i gave you the mathematics that represent hard facts and even explained them a bit
      and all you can say is that it's irrelevant?
      just read the goddamn text and if you have questions, ask them
      but don't just make Lennart out to be a victim
      (he didn't even make cgroups..)


      and to think that i was even being generous towards cgroup arbitrating...
      an arbitrator would have to sample processor/io usage data at some relatively low rate, so it wouldn't be able to be nearly as precise as the kernel's internal scheduler
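
      for comparison, a minimal sketch of the nice-based approach described above, i.e. a launcher or WM demoting a background process; the pid comes from the command line and nothing here is specific to any desktop environment
      Code:
      /* sketch: demote a background process with the standard setpriority() call */
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/resource.h>
      #include <sys/types.h>

      int main(int argc, char **argv)
      {
          if (argc != 3) {
              fprintf(stderr, "usage: %s <pid> <nice -20..19>\n", argv[0]);
              return 1;
          }
          pid_t pid = (pid_t)atoi(argv[1]);
          int niceval = atoi(argv[2]);

          /* CFS then weights the process by its nice value; unlike a hard cgroup
             cap it still gets the whole CPU when nothing else is runnable */
          if (setpriority(PRIO_PROCESS, pid, niceval) != 0) {
              perror("setpriority");
              return 1;
          }
          return 0;
      }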




      bonus: a fun fact i found out
      JACK development was one of the things that led to latency optimizations in the whole linux kernel
      so we can thank the JACK devs, the ftrace devs and the RT crowd (and probably some other people) that we have these low latencies in the kernel

      Comment


      • Originally posted by gens View Post
        nice
        i am in fact interested in the result, since, due to the VIA envy chip, your configuration is the best possible for PA
        i would also like to know if there is a difference in cpu usage when playing a 44.1kHz sound compared to a 48kHz sound, as that would show if PA treats your card properly
        It is actually the default configuration from Fedora Workstation 21 Alpha. My system uses an AMD Phenom II X4 940. Do you have a sample 48kHz sound?
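
        In case it helps, here is a small sketch (not from this thread) that generates its own 48kHz test tone through the ALSA "default" device, so CPU usage can be compared against 44.1kHz just by changing the rate; build with gcc tone.c -lasound -lm.
        Code:
        /* sketch: play a 5 second 440 Hz sine at 48 kHz through the "default" device */
        #include <alsa/asoundlib.h>
        #include <math.h>

        int main(void)
        {
            snd_pcm_t *pcm;
            const unsigned int rate = 48000;   /* change to 44100 to compare CPU usage */
            const double two_pi = 6.283185307179586;
            short buf[2 * 480];                /* 10 ms of interleaved stereo S16 frames */

            if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
                return 1;
            /* S16_LE, interleaved, 2 channels, soft resampling allowed, 100 ms latency */
            if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                   SND_PCM_ACCESS_RW_INTERLEAVED,
                                   2, rate, 1, 100000) < 0)
                return 1;

            double phase = 0.0, step = two_pi * 440.0 / rate;
            for (int block = 0; block < 500; block++) {        /* 500 * 10 ms = 5 s */
                for (int i = 0; i < 480; i++) {
                    short s = (short)(20000.0 * sin(phase));
                    buf[2 * i] = buf[2 * i + 1] = s;
                    phase += step;
                }
                if (snd_pcm_writei(pcm, buf, 480) < 0)
                    snd_pcm_prepare(pcm);                      /* recover from an xrun */
            }
            snd_pcm_drain(pcm);
            snd_pcm_close(pcm);
            return 0;
        }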


        since your card supports resampling, i would also be interested in what is the difference in the cpu usage of the application itself when using the alsa-PA plugin vs raw
        Code:
        pcm.!default {
            type hw
            card 0
        }
        ctl.!default {
            type hw
            card 0
        }
        this goes into $HOME/.asoundrc; it tells libalsa to use the sound card directly
        on any other sound card this would cause problems when trying to play from 2 or more sound sources, but not on yours
        I had to create .asoundrc because it was not present in my $HOME folder. So far, I have had no luck with that setting.


        maybe i am asking too much
        Yes. I wonder why you did not participate in PulseAudio development and propose your suggestions there.


        note that this has, for the most part, nothing to do with latency
        to properly measure latency would require an M-M 3.5mm cable connecting the card's output to its input (and a resistor on it)
        as described at http://apps.linuxaudio.org/wiki/jack_latency_tests
        I don't have such equipment and have little interest in spending money on it.

        Comment


        • Originally posted by gens View Post
          no i do not want to
          in a proper X desktop environment all user applications are started with a nice value higher than X and the WM, as those are important for responsiveness
          in the theoretical wayland scenario the WM can set nice itself (it would be the only userspace process that can even know those kinds of things about windows)
          the scenario that you described has nothing to do with systemd

          nice and CFS are irrelevant ?
          i spent half an hour of my life sharing knowledge about how CFS does process scheduling properly and how it compares to limiting using cgroups
          i gave you the mathematics that represent hard facts and even explained them a bit
          and all you can say is that it's irrelevant?
          just read the goddamn text and if you have questions, ask them
          but don't just make Lennart out to be a victim
          (he didn't even make cgroups..)


          and to think that i was even being generous towards cgroup arbitrating...
          an arbitrator would have to sample processor/io usage data at some relatively low rate, so it wouldn't be able to be nearly as precise as the kernel's internal scheduler
          This whole comparison between cgroups and nice is pointless. Cgroups do resource partitioning; nice works on individual processes. The two concepts overlap to some degree but are otherwise orthogonal. Saying "we don't need cgroups because there's nice" is like saying that we don't need any sort of traffic regulation because each individual car has brakes.
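
          To make that distinction concrete, here is a small sketch (group names invented, cgroup v2 and root assumed): a cgroup weight splits CPU time between whole groups of processes under contention, which no combination of per-process nice values expresses directly.
          Code:
          /* sketch: two cgroup v2 groups sharing the CPU 3:1 under contention,
             regardless of how many processes each group contains */
          #include <stdio.h>
          #include <sys/stat.h>
          #include <sys/types.h>

          static void put(const char *path, const char *val)
          {
              FILE *f = fopen(path, "w");
              if (f) { fputs(val, f); fclose(f); }
          }

          int main(void)
          {
              /* enable the cpu controller for child groups (may already be enabled) */
              put("/sys/fs/cgroup/cgroup.subtree_control", "+cpu");

              mkdir("/sys/fs/cgroup/interactive", 0755);
              mkdir("/sys/fs/cgroup/batch", 0755);

              /* proportional shares: 300 vs 100 gives a 3:1 split under contention,
                 whether "batch" holds one process or fifty */
              put("/sys/fs/cgroup/interactive/cpu.weight", "300");
              put("/sys/fs/cgroup/batch/cpu.weight", "100");

              /* processes are then attached by writing their PIDs to
                 <group>/cgroup.procs (not shown here) */
              return 0;
          }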

          Comment


          • Originally posted by kringel View Post
            I'm not sure about the whole open-source community, but I think it's rather Red Hat that is a sick place:
            http://igurublog.wordpress.com/2014/...cts-your-life/ (second half details Red Hat's involvement in Linux)

            http://igurublog.wordpress.com/2014/...ed-by-the-nsa/ (not literally "owned", but they got pwned heavily not only in their random number catastrophe)

            And I love this quote from the last article:

            And check out the video links in this thread:


            It shows how he deals with the community. It's a talk from a German Chaos Computer Club conference. If I had been in the audience (no matter where) I would have run to the stage and punched Poettering in the face. Twice. And in case you think I'm an aggressive person or something, just look at this short section from the video: http://www.youtube.com/watch?v=_ERAXJj142o#t=3225s
            I just want to say thanks for the great reading! Rarely do I find something so precisely on the same wavelength as my own thoughts. I hadn't read IG's blog before, but when he talks about Red Hat essentially controlling every core component in Linux, it's exactly what I and other reasonable people have been saying on these boards for months.

            Once again, the problem is that nobody is willing to listen. Systemd fanboys don't want to listen because they're either paid corporate shills or simply too dumb to understand what it's all about. Major distro maintainers are bought or otherwise "persuaded" to do what RH (and the forces behind RH) want them to do. Independent FOSS developers eventually give up one by one as they see the most promising projects either cannibalized by systemd or continuously sidelined and ignored.

            We are the FOSS community and we must realize that this is our war. Linus won't fight it for us. Stallman won't fight it for us. GPL won't magically win it by itself. (GPL doesn't compel one to write good code. GPL doesn't prevent your crypto from being sabotaged. GPL doesn't prevent corruption or make developers immune to intimidation.) We need to start with boycotting systemd and other Red Hat projects. We need to support independent software projects and distros. And we need to stop blaming Lennart Poettering for our own faults. He wouldn't be such a powerful figure in the Linux world today if we didn't - willingly and blindly - invest that kind of power in him.
            Last edited by prodigy_; 12 October 2014, 03:44 PM.

            Comment


            • Originally posted by prodigy_ View Post
              [...]Once again, the problem is that nobody is willing to listen. Systemd fanboys don't want to listen because they're either paid corporate shills or simply too dumb to understand what it's all about.[...]
              So everybody with a different view on that subject is a fanboy and too dumb to understand what's going on? Way to go! Now they will surely listen and change their view, because people just love being told that they are fanboys and dumb! (sarcasm)

              If you really have arguments, present them and stop attacking other people. And if your only argument is "it could be done otherwise" then do it or find people that can do it and promote these solutions instead of attacking existing working solutions.

              Comment


              • Originally posted by droste View Post
                So everybody with a different view on that subject is a fanboy and too dumb to understand what's going on? Way to go! Now they will surely listen and change their view, because people just love being told that they are fanboys and dumb! (sarcasm)

                If you really have arguments, present them and stop attacking other people. And if your only argument is "it could be done otherwise" then do it or find people that can do it and promote these solutions instead of attacking existing working solutions.
                Don't try. Just look at the posting history of this one and you will realize that not one intelligent thought has ever come from this "prodigy".

                Comment


                • TBH, you can do the same for some of the systemd fanboys. They give an entirely new meaning to "rose-colored glasses".

                  Comment


                  • Originally posted by gens View Post
                    no i do not want to
                    in a proper X desktop environment all user applications are started with a nice value higher than X and the WM, as those are important for responsiveness
                    in the theoretical wayland scenario the WM can set nice itself (it would be the only userspace process that can even know those kinds of things about windows)
                    Giving "soft" scheduling priorities by using "nice", isn't the same as putting on hard limits systemd does with cgroups.

                    Originally posted by gens View Post
                    the scenario that you described has nothing to do with systemd
                    Yes it does, since systemd is the cgroup manager on systemd machines. So systemd will be involved in automatic and dynamic resource allocation, where priorities are based on what task the computer is performing at the moment.


                    Originally posted by gens View Post
                    nice and CFS are irrelevant ?
                    i spent half an hour of my life sharing knowledge about how CFS does process scheduling properly and how it compares to limiting using cgroups
                    i gave you the mathematics that represent hard facts and even explained them a bit
                    and all you can say is that it's irrelevant?
                    just read the goddamn text and if you have questions, ask them
                    but don't just make Lennart out to be a victim
                    (he didn't even make cgroups..)
                    First, there is nothing new in what you wrote. Secondly, it is irrelevant to the described scenario. It is great that kernel schedulers like CFS exist (there are many more), but they don't solve this problem as such. Again, you are confusing scheduling priorities with the hard limits that cgroups provide.

                    Comment


                    • Originally posted by interested View Post
                      Giving "soft" scheduling priorities by using "nice", isn't the same as putting on hard limits systemd does with cgroups.

                      ...

                      First, there is nothing new in what you wrote. Secondly, it is irrelevant to the described scenario. It is great that kernel schedulers like CFS exist (there are many more), but they don't solve this problem as such. Again, you are confusing scheduling priorities with the hard limits that cgroups provide.
                      yes, there is nothing new
                      what i was doing with that piece of text was comparing doing something with nice to doing something with cgroups
                      as i nicely calculated, giving a nice of 20 to the "background" process gives exactly the same effect as putting it in a cgroup and limiting it to 1.14%
                      even better, since it does it as a ratio of current cpu usage, not as a rolling average
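
                      for reference, a rough version of that arithmetic, assuming the kernel's default nice-to-weight table (nice 0 has weight 1024, each nice step is roughly a factor of 1.25, nice 19 has weight 15): a lowest-priority background task competing with a single normal task gets about
                      15 / (15 + 1024) ≈ 0.014 ≈ 1.4%
                      of the cpu, the same order as the figure above, and CFS recomputes that ratio at every scheduling decision rather than enforcing it as a fixed cap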

                      limiting something that hard is not something a cgroup arbiter would do
                      it would probably have a lower boundary in the range of something like 5% - 10%

                      another thing, even older than that, is the theory of how "multitasking" is implemented in computers
                      there is, ofc, no way to run more than one process at a time on a single cpu core
                      that is the whole reason for needing a scheduler in the first place
                      a scheduler runs one process, then it stops it and runs another process and so on
                      simple so far
                      a preemptive scheduler does this by using a mechanism built in the cpu
                      namely a mechanism called the "Programmable Interval Timer", which is programmed to send an interrupt after (or at) some set time
                      ofc, that's not that new either
                      but it makes it possible to get fine scheduling granularity without much overhead, since it is hardware assisted
                      and a clever algorithm makes sure that cpu time is fairly distributed and that the maximum latency does not go over some threshold
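
                      as a userspace analogy only (this is not how the kernel itself is written), the same idea can be sketched with a periodic timer signal standing in for the timer interrupt that hands control back to the scheduler
                      Code:
                      /* sketch: a periodic "tick" delivered by a timer, the userspace analogy of a
                         PIT/hrtimer interrupt giving a preemptive scheduler back control */
                      #include <signal.h>
                      #include <stdio.h>
                      #include <string.h>
                      #include <sys/time.h>
                      #include <unistd.h>

                      static volatile sig_atomic_t ticks;

                      static void on_tick(int sig)
                      {
                          (void)sig;
                          ticks++;              /* a real scheduler would pick the next task here */
                      }

                      int main(void)
                      {
                          struct sigaction sa;
                          memset(&sa, 0, sizeof sa);
                          sa.sa_handler = on_tick;
                          sigaction(SIGALRM, &sa, NULL);

                          /* fire every 10 ms, roughly a classic 100 Hz scheduler tick */
                          struct itimerval it;
                          memset(&it, 0, sizeof it);
                          it.it_interval.tv_usec = 10000;
                          it.it_value.tv_usec = 10000;
                          setitimer(ITIMER_REAL, &it, NULL);

                          while (ticks < 500)   /* "run" for about 5 seconds worth of ticks */
                              pause();          /* woken by each SIGALRM, like a preempted task */

                          printf("%d ticks\n", (int)ticks);
                          return 0;
                      }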

                      a cgroup arbiter (not manager) has to pull cpu usage data from the kernel at a regular interval (since it is a user space process)
                      just to make it possible for it to make a decision (dynamically)
                      (or a cgroup manager can simply limit the process when it is started, as systemd does according to its .service files)

                      so the limiting, while dynamic, would lag behind real usage
                      causing overall lower throughput in a non-ideal scenario (which almost all are)
                      meaning that the background process would take longer to finish its job and the foreground process could potentially get screwed over

                      which leads to the fact that CFS accounts cpu time at nanosecond granularity (by default)
                      if a user space process (like htop or a cgroup arbiter) were to pull cpu usage data for every process every nanosecond,
                      it would itself use all the cpu time and still fall behind
                      and that's without counting the fact that a syscall has a small delay in the first place
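
                      to make the polling point concrete, a rough sketch of what such an arbiter (or htop) actually has to do: read utime+stime from /proc/<pid>/stat once per interval and infer usage after the fact
                      Code:
                      /* sketch: sample a process's cpu usage the way a userspace arbiter would */
                      #include <stdio.h>
                      #include <stdlib.h>
                      #include <string.h>
                      #include <unistd.h>

                      /* returns utime+stime in clock ticks for pid, or -1 on error */
                      static long cpu_ticks(long pid)
                      {
                          char path[64], line[1024];
                          snprintf(path, sizeof path, "/proc/%ld/stat", pid);
                          FILE *f = fopen(path, "r");
                          if (!f) return -1;
                          if (!fgets(line, sizeof line, f)) { fclose(f); return -1; }
                          fclose(f);

                          /* the comm field may contain spaces, so skip past the closing ')' */
                          char *p = strrchr(line, ')');
                          if (!p) return -1;
                          long utime, stime;
                          /* after ')': state ppid pgrp sid tty tpgid flags minflt cminflt
                             majflt cmajflt, then utime and stime (fields 14 and 15 overall) */
                          if (sscanf(p + 2, "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %ld %ld",
                                     &utime, &stime) != 2)
                              return -1;
                          return utime + stime;
                      }

                      int main(int argc, char **argv)
                      {
                          if (argc != 2) return 1;
                          long pid = atol(argv[1]);
                          long hz = sysconf(_SC_CLK_TCK);

                          for (;;) {
                              long before = cpu_ticks(pid);
                              sleep(1);                     /* the sampling interval: 1 s, not 1 ns */
                              long after = cpu_ticks(pid);
                              if (before < 0 || after < 0) break;
                              printf("~%.1f%% cpu over the last second\n",
                                     100.0 * (after - before) / hz);
                          }
                          return 0;
                      }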


                      but ye
                      you are, ofc, free to take a (small) overhead from cgroups and a cgroup arbiter while you are playing your game
                      i'm sure you won't notice it

                      and yes, it has to do with systemd, if you use systemd
                      because you can't use cgroups on a systemd computer without asking systemd to do it for you


                      if you really want a good use scenario for cgroup resource limiting, it is clearly written in the CFS and cgroups documentation
                      so i assume you didn't read the documentation, or that you didn't understand it
                      ceage hinted at it, albeit very vaguely
                      (hint: it doesn't have any use in the most common, single user desktop use case)
                      (another hint: it's one of the reasons for making cgroups)

                      anyway, i'm done with this bullshit
                      it is clear that this has more to do with belief than anything real,
                      and i'm agnostic

                      Comment
