Lennart Poettering On The Open-Source Community: A Sick Place To Be In


  • Originally posted by interested View Post
    AFAIK, udev didn't break userspace when integrated into systemd. That people would have to patch it in order to use it independently is exactly what forking is all about.

    The udev-systemd integration didn't break compatibility for any systemd distro either. That people want to use some of systemd's code without using systemd as init is their problem.

    Let me stress that: people who don't want to use systemd have the sole and complete responsibility of making their Linux distro work, including developing or forking any necessary code.

    Yes, it would be convenient for the non-systemd users if the systemd developers did all this work for them, but that is an unreasonable requirement by any standard.
    Seriously, you're saying it didn't break anything, because you should just pre-emptively patch it? You had a common use case that worked before but didn't work after? How is that not broken?

    Your argument might hold merit if udev was originally a systemd project, but it wasn't. It was an independent project that didn't require a specific init system. It kept the same name but changed its use case with very little warning.



    • Originally posted by WorBlux View Post
      Seriously, you're saying it didn't break anything, because you should just pre-emptively patch it? You had a common use case that worked before but didn't work after? How is that not broken?
      It was only a problem for those distros that didn't want to follow upstream's desire to use systemd.

      It is like when LXDE changed toolkits from GTK+ to Qt. I am sure that some had an LXDE widget they refused to convert to Qt for whatever reason, so it no longer works, but that is their problem, not LXDE's problem.

      In short, open source programs change all the time, and people can either follow in the new direction, fork, or use something completely different. What they can't do is claim ownership of code they don't own, and demand that the developers never change it because they have a "use case" and disagree with the direction the project is taking.

      You can't demand that LXDE may never change to using Qt. And you can't demand that udev never be integrated with systemd.

      If people disagree with the direction udev is taking, they can fork it or use something else. It is their choice.


      Originally posted by WorBlux View Post
      Your argument might hold merit if udev was originally a systemd project, but it wasn't. It was an independent project that didn't require a specific init system. It kept the same name but changed its use case with very little warning.
      Not only did they warn about the change months in advance, they officially supported building and using udev outside systemd for a long time:


      udev merged with systemd because it was a good idea, since udev and systemd have closely aligned goals. udev didn't belong to you, nor to anybody other than the udev developers, so the fact that it was once an independent project is completely irrelevant.

      In short, people have no right to prevent an upstream project from developing.

      If people want a distro without using systemd, they have the sole responsibility of developing and maintaining all the necessary code for doing so. That includes udev code or code with similar functionality.



      • Originally posted by gens View Post
        and again;
        YOU CAN NOT LIMIT FIREFOX MEMORY USAGE
        it either uses a lot of memory, or it dies
        simple as that
        That ain't so. Hell, would our systems suck if they were *that* fragile. I mean, we're not talking about MS-DOS here.

        To begin with, each and every program is inherently limited in terms of memory usage by what the hardware provides, so there's no serious question about whether memory usage *can* be limited. The limit is a fact. Dealing with that limit is the whole point of memory management.

        On Linux, a lot of factors determine what happens under memory pressure, not the least of which are the programs themselves. They are responsible for requesting memory from the OS in the first place and they are free to handle failed requests gracefully. On Linux (and other Unices) those syscalls are named brk(), sbrk() and mmap(), though most programmers will usually call some library function, like glibc's malloc(), which will then use a syscall internally. The kernel OOM killer kicks in as a last resort, but there's not even a guarantee about which process will get killed first.
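The "handle failed requests gracefully" part is easy to demonstrate: cap a subshell's address space with ulimit and let a program catch the refused allocation instead of dying. This is a sketch under arbitrary sizes, and Python merely stands in for any program that checks its allocation calls.

```shell
# Sketch: cap a subshell's address space, then catch the failed allocation.
(
  ulimit -v 500000    # ~500 MB address-space cap for this subshell only
  python3 -c '
try:
    buf = bytearray(2 * 1024 * 1024 * 1024)   # ask for 2 GiB under the cap
    print("got it")
except MemoryError:                           # the ENOMEM surfaces here
    print("allocation refused; recovering gracefully")
'
)
```

The same failure reaches a C program as a NULL return from malloc() or MAP_FAILED from mmap(); whether it recovers or crashes is entirely up to the code handling that return value.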

        As far as cgroups are concerned, processes in a memory limited cgroup can be put in a wait queue until memory can be freed up. It's not hard to come up with use cases for this. Several of them are documented in the Linux source tree under `Documentation/cgroups/*'.



        • Originally posted by ceage View Post
          That ain't so. Hell, would our systems suck if they were *that* fragile. I mean, we're not talking about MS-DOS here.
          what a program does when it gets an ENOMEM when calling mmap() is up to the program itself
          firefox is not a simple program, so recovering from that error is not easy

          i suggest putting it into a cgroup with a memory limit that you know it will grow out of (it's 980MB in size now for me, out of 4GB total)
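For reference, that suggestion looks roughly like this with the cgroup-v1 memory controller (the mount point is the usual one for distros of this era; the group name "ff" and the limit are arbitrary, and the commands need root):

```shell
mkdir /sys/fs/cgroup/memory/ff
echo 512M > /sys/fs/cgroup/memory/ff/memory.limit_in_bytes   # cap below its usual footprint
echo $$ > /sys/fs/cgroup/memory/ff/tasks     # move this shell into the group
firefox &                                    # children inherit the cgroup
```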



          • Originally posted by gens View Post
            jack with default settings uses around 0.5% cpu
            sox then uses ~1.4%
            since the default sampling rate for jack is 48kHz, changing it to 44.1kHz lowered sox cpu usage to 0.5%
            Code:
            bash-4.2# cat /proc/asound/card0/pcm0p/sub0/hw_params
            access: MMAP_INTERLEAVED
            format: S32_LE
            subformat: STD
            channels: 8
            rate: 48000 (48000/1)
            period_size: 1024
            buffer_size: 2048
            missed a detail
            changing JACK's format to 16bit lowered the cpu usage to somewhere between 0% and 0.5% (about ~0.3% id say)
            16bit signed integer is what sox and PA use
            changing the number of periods to 4, to be same as PA, probably lowered it a bit more (hard to notice in htop)
            i need a slower cpu..
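For anyone wanting to reproduce these numbers, the settings described above map onto jackd's ALSA-backend flags; hw:0 is an example device, and the flag names are as documented for JACK's ALSA driver:

```shell
# 44.1 kHz rate, 16-bit samples (-S), 1024-frame period, 4 periods per buffer
jackd -d alsa -d hw:0 -r 44100 -S -p 1024 -n 4
```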



            • Originally posted by ceage View Post
              As far as cgroups are concerned, processes in a memory limited cgroup can be put in a wait queue until memory can be freed up. It's not hard to come up with use cases for this. Several of them are documented in the Linux source tree under `Documentation/cgroups/*'.
              hmm...
              nice
              ppl who made them deserve a cookie

              what if it was the only program in the group ?
              wouldn't it then wait forever
              it is what we were talking about, one process per group



              • Originally posted by gens View Post
                what a program does when it gets an ENOMEM when calling mmap() is up to the program itself
                firefox is not a simple program, so recovering from that error is not easy

                i suggest putting it into a cgroup with a memory limit that you know it will grow out of (it's 980MB in size now for me, out of 4GB total)
                Firefox's use of cache is proportional to the memory you have (all browsers do this, and any application that uses huge amounts of cache should). I don't know how it behaves if you reduce its total available memory at run time, but if it's limited from the start, it will simply use less cache and, as such, less memory (to the detriment of performance).
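If you would rather cap the cache itself than the whole process, Firefox also exposes a preference for it. A sketch, assuming the `browser.cache.memory.capacity` pref (value in KiB, default -1 = auto-size from installed RAM); the profile path is an example to adjust:

```shell
# Cap Firefox's in-memory cache at 64 MiB instead of RAM-proportional auto-sizing
echo 'user_pref("browser.cache.memory.capacity", 65536);' \
    >> ~/.mozilla/firefox/<profile>/user.js
```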



                • Originally posted by gens View Post
                  hmm...
                  nice
                  ppl who made them deserve a cookie

                  what if it was the only program in the group ?
                  wouldn't it then wait forever
                  it is what we were talking about, one process per group
                  I'd say this is the sort of situation that a) should almost never happen in practice and b) should be handled by an admin manually, given that the OOM killer was likely disabled on purpose for that cgroup. So, apart from re-enabling that, an admin could temporarily raise the memory limit for the cgroup in question. Also, programs may be written to make use of cgroup notifications generated by the kernel so as to anticipate low memory situations and (hopefully) handle them nicely.
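Under cgroup v1, the manual fix described above amounts to two writes (the group name "ff" and the new limit are examples; needs root):

```shell
echo 0 > /sys/fs/cgroup/memory/ff/memory.oom_control        # 0 = re-enable the OOM killer
echo 2G > /sys/fs/cgroup/memory/ff/memory.limit_in_bytes    # raise the cap
# programs can also subscribe to memory.pressure_level through
# cgroup.event_control and eventfd(2) to receive the kernel notifications
```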
