Lennart Poettering On The Open-Source Community: A Sick Place To Be In


  • ceage
    replied
    Originally posted by gens View Post
    hmm...
    nice
    ppl who made them deserve a cookie

    what if it was the only program in the group ?
    wouldn't it then wait forever
    it is what we were talking about, one process per group
    I'd say this is the sort of situation that a) should almost never happen in practice and b) should be handled by an admin manually, given that the OOM killer was likely disabled on purpose for that cgroup. So, apart from re-enabling that, an admin could temporarily raise the memory limit for the cgroup in question. Also, programs may be written to make use of cgroup notifications generated by the kernel so as to anticipate low memory situations and (hopefully) handle them nicely.
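As a sketch of what that manual admin intervention could look like (cgroup v1 paths, as used at the time; the group name `browser` and the limit values here are made up for illustration, not taken from the thread):

```shell
# Hypothetical cgroup v1 session (root required). The "browser" group
# name and the limits are illustrative only.
mkdir -p /sys/fs/cgroup/memory/browser

# set an initial hard limit of 512 MiB on the group
echo $((512 * 1024 * 1024)) > /sys/fs/cgroup/memory/browser/memory.limit_in_bytes

# an admin can raise the limit at run time to unstick processes
# waiting on memory in that group
echo $((1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/browser/memory.limit_in_bytes
```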



  • erendorn
    replied
    Originally posted by gens View Post
what a program does when it gets ENOMEM when calling mmap() is up to the program itself
firefox is not a simple program, so it can't recover from that error easily

    i suggest putting it into a cgroup with a memory limit that you know it will grow out of (it's 980MB in size now for me, out of 4GB total)
Firefox's use of cache is proportional to the memory you have (all browsers do this, and any application that uses huge amounts of cache should). I don't know how it behaves if you reduce its total available memory at run time, but if it's limited from the start, it will simply use less cache and, as such, less memory (to the detriment of performance).
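A rough way to try "limited from the start" (assuming cgroup v1 and the libcgroup tools are installed; the group name `limited-ff` and the 1 GiB limit are made up for the example):

```shell
# Hypothetical: create a memory-limited cgroup and launch firefox inside it,
# so it sizes its caches against that limit from the beginning.
# Assumes the libcgroup userspace tools (cgcreate/cgexec) are available.
cgcreate -g memory:limited-ff
echo $((1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/limited-ff/memory.limit_in_bytes
cgexec -g memory:limited-ff firefox
```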



  • gens
    replied
    Originally posted by ceage View Post
As far as cgroups are concerned, processes in a memory limited cgroup can be put in a wait queue until memory can be freed up. It's not hard to come up with use cases for this. Several of them are documented in the Linux source tree under `Documentation/cgroups/*`.
    hmm...
    nice
    ppl who made them deserve a cookie

    what if it was the only program in the group ?
    wouldn't it then wait forever
    it is what we were talking about, one process per group



  • gens
    replied
    Originally posted by gens View Post
    jack with default settings uses around 0.5% cpu
    sox then uses ~1.4%
    since the default sampling rate for jack is 48kHz, changing it to 44.1kHz lowered sox cpu usage to 0.5%
    Code:
    bash-4.2# cat /proc/asound/card0/pcm0p/sub0/hw_params
    access: MMAP_INTERLEAVED
    format: S32_LE
    subformat: STD
    channels: 8
    rate: 48000 (48000/1)
    period_size: 1024
    buffer_size: 2048
    missed a detail
changing JACK's format to 16bit lowered the cpu usage to somewhere between 0% and 0.5% (about 0.3% i'd say)
    16bit signed integer is what sox and PA use
changing the number of periods to 4, to be the same as PA, probably lowered it a bit more (hard to notice in htop)
    i need a slower cpu..
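For reference, the settings discussed above map onto jackd's ALSA backend flags roughly like this (an illustrative invocation, not the exact command used in the thread; `hw:0` is an assumed device name):

```shell
# Hypothetical jackd invocation matching the discussed settings:
# 44.1 kHz sample rate, 1024-frame period, 4 periods, and -S to force
# 16-bit samples (the format sox and PA use) instead of 32-bit.
jackd -d alsa -d hw:0 -r 44100 -p 1024 -n 4 -S
```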



  • gens
    replied
    Originally posted by ceage View Post
    That ain't so. Hell, would our systems suck if they were *that* fragile. I mean, we're not talking about MS-DOS here.
what a program does when it gets ENOMEM when calling mmap() is up to the program itself
firefox is not a simple program, so it can't recover from that error easily

    i suggest putting it into a cgroup with a memory limit that you know it will grow out of (it's 980MB in size now for me, out of 4GB total)



  • ceage
    replied
    Originally posted by gens View Post
    and again;
    YOU CAN NOT LIMIT FIREFOX MEMORY USAGE
    it either uses a lot of memory, or it dies
    simple as that
    That ain't so. Hell, would our systems suck if they were *that* fragile. I mean, we're not talking about MS-DOS here.

    To begin with, each and every program is inherently limited in terms of memory usage by what the hardware provides, so there's no serious question about whether memory usage *can* be limited. The limit is a fact. Dealing with that limit is the whole point of memory management.

On Linux, a lot of factors determine what happens under memory pressure, not the least of which are the programs themselves. They are responsible for requesting memory from the OS in the first place, and they are free to handle failed requests gracefully. On Linux (and other Unices) the underlying interfaces are brk(), sbrk() and mmap(), though most programmers will usually call some library function, like glibc's malloc(), which uses one of those internally. The kernel OOM killer kicks in as the ultima ratio, but there's not even a guarantee about which process will get killed first.

As far as cgroups are concerned, processes in a memory limited cgroup can be put in a wait queue until memory can be freed up. It's not hard to come up with use cases for this. Several of them are documented in the Linux source tree under `Documentation/cgroups/*`.
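A quick way to watch a failed allocation being reported to the program instead of taking the system down (using the shell's `ulimit` as a simple stand-in for a cgroup memory limit):

```shell
# Cap the subshell's address space at ~8 MiB, then ask dd for a 64 MiB
# buffer: the allocation fails, dd reports the error and exits non-zero,
# and nothing else on the system is affected.
( ulimit -v 8192; dd if=/dev/zero of=/dev/null bs=64M count=1 ) 2>/dev/null
echo "dd exit status: $?"
```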



  • interested
    replied
    Originally posted by WorBlux View Post
    Seriously, you're saying it didn't break anything, because you should just pre-emptively patch it? You had a common use case that worked before but didn't work after? How is that not broken?
It was only a problem for those distros that didn't want to follow upstream's desire to use systemd.

It is like when LXDE changed toolkit from GTK+ to Qt. I am sure that some had an LXDE widget they refused to convert to Qt for whatever reason, so it no longer works, but that is their problem, not LXDE's problem.

In short, open source programs change all the time, and people can either follow in the new direction, fork, or use something completely different. What they can't do is claim ownership of code they don't own, and demand that the developers never change it, because they have a "use case" and disagree with the direction the project is taking.

You can't demand that LXDE may never change to using Qt. And you can't demand that udev should never be integrated with systemd.

If people disagree with the direction that udev has taken, they can just fork it or use something else. It is their choice.


    Originally posted by WorBlux View Post
Your argument might hold merit if udev was originally a systemd project, but it wasn't. It was an independent project that didn't require a specific init system. It kept the same name but changed its use case with very little warning.
Not only did they warn about the change months in advance, they also officially supported building and using udev outside systemd for a long time:
    https://lwn.net/Articles/490413/

udev merged with systemd because it was a good idea, since udev and systemd have very similar goals. udev didn't belong to you, nor to anybody other than the udev developers, so the fact that it once was an independent project is completely irrelevant.

In short, people have no right to prevent an upstream project from developing.

    If people want a distro without using systemd, they have the sole responsibility of developing and maintaining all the necessary code for doing so. That includes udev code or code with similar functionality.



  • WorBlux
    replied
    Originally posted by interested View Post
    AFAIK, udev didn't break userspace when integrated into systemd. That people would have to patch it in order to use it independently is exactly what forking is all about.

    The udev-systemd integration didn't break compatibility for any systemd distro either. That people want to use some of systemd's code without using systemd as init is their problem.

    Let me stress that; people who don't want to use systemd, have the sole and complete responsibility of making their Linux distro work, including either to develop or fork any necessary code.

    Yes, it would be convenient for the non-systemd users if the systemd developers made all their work for them, but that is an unreasonable requirement by any standard.
    Seriously, you're saying it didn't break anything, because you should just pre-emptively patch it? You had a common use case that worked before but didn't work after? How is that not broken?

Your argument might hold merit if udev was originally a systemd project, but it wasn't. It was an independent project that didn't require a specific init system. It kept the same name but changed its use case with very little warning.



  • gens
    replied
    Originally posted by interested View Post
To me it looks like you misunderstand how "nice" works. "nice" is all about yielding to higher priorities and thereby putting "soft" limits on the process. But just because a process has a low nice priority doesn't mean it won't hog CPU time if it can.

Try running "stress" on a fairly idle system; it generates CPU load (it should be in most distro repos, or use something similar, like cpuburn).

renice the "stress" process to 19 and watch "top". Then try renicing "stress" to -20 and watch top again. As you can see, "stress" will hog 95-100% cpu time even if it has the lowest nice priority possible. This is a good thing in some scenarios; in others, not so much.
    no, i didn't
    and stop being smart

the kernel's scheduler calculates the time it will give to a process
in advance, every "epoch"
in a scenario with only 2 processes running, a process with a "weight" a hundred times greater than the other's will get 99% of the time of the epoch
    SO STOP THE BULLSHIT THINKING

    processes don't have "gears", they don't have a "speed" and they do not have a limit
    except a run time limit, in an epoch
    so if the process with the lower nice yields, the process with the higher nice will get more time
    AND THE PROCESS WITH THE LOWER NICE WON'T EVEN NOTICE IT

    do i have to repeat the same thing 10 times ?

    you go run some tests
you don't need a "stress" utility, you can use "dd if=/dev/urandom of=/dev/null" (add & and run one for every cpu core)
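The claim above can be checked piecewise: nice only changes the weight the scheduler uses when processes compete, not a hard cap. This sketch just confirms the nice value is applied; observing the CPU-sharing behaviour itself needs two competing hogs and `top`:

```shell
# A niced process reports its new nice value but is not otherwise
# throttled; run alone on an idle core it will still use ~100% CPU.
nice -n 19 sh -c 'ps -o ni= -p $$' | tr -d ' '
```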



    a great example is me playing dota2
    dota 2 is far from an optimized game (they all are)
my cpu is just about on the edge of running it well

    i played a couple rounds and after i exited, guess what
turns out i forgot to turn off the litecoin miner
    it tried to use 100% of all my cores, and i didn't even notice it
    why don't you explain that to me ? (don't)


    and again;
    YOU CAN NOT LIMIT FIREFOX MEMORY USAGE
    it either uses a lot of memory, or it dies
    simple as that

    also there is a scenario where i want to play a youtube video while playing dota, or have voice chat or even a video call on a diff monitor
i don't want that youtube/chat to stutter just because someone was smart enough to put it in a fucking cpu limited sandbox

    i told you that i'm done with this bullshit reasoning
    YOU make a test that backs up your case, or at least think a little and stop with the bollocks
    Last edited by gens; 15 October 2014, 10:17 AM.



  • interested
    replied
    Originally posted by WorBlux View Post
    Yes, and it can even auto-unmount if you edit the sudoer's file.
I think the bigger issue is that a lot of programs depend on udev directly (chrome/chromium browser, libatasmart, guvcview, udisks, and bluez have hard dependencies, and another dozen or so have optional support, and this is on a system I try to keep fairly minimal for my needs), so it's definitely not an option on the typical desktop
Well, my point was exactly that mdev isn't a serious alternative to udev, which you seem to confirm here. Sure, it could become one, but that would require a developer community that doesn't seem to exist.


    Originally posted by WorBlux View Post
    Linus at least is really big on not breaking userspace.

Not breaking other dependent code is just good manners. Yes, the code can be and has been forked, but ideally you use that for some wild new idea that needs more work and that will be folded back in if it pans out. Forcing a break means that maintaining compatibility takes a lot of continuing effort that might be better used elsewhere.
    AFAIK, udev didn't break userspace when integrated into systemd. That people would have to patch it in order to use it independently is exactly what forking is all about.

    The udev-systemd integration didn't break compatibility for any systemd distro either. That people want to use some of systemd's code without using systemd as init is their problem.

    Let me stress that; people who don't want to use systemd, have the sole and complete responsibility of making their Linux distro work, including either to develop or fork any necessary code.

    Yes, it would be convenient for the non-systemd users if the systemd developers made all their work for them, but that is an unreasonable requirement by any standard.

