Lennart Poettering On The Open-Source Community: A Sick Place To Be In


  • Originally posted by finalzone View Post
    It is actually the default configuration from Fedora Workstation 21 Alpha. My system uses an AMD Phenom II X4 940. Do you have a sample of 48kHz sound?

    I had to create .asoundrc because it was not present in my $HOME folder. So far, I have had a failure with that setting.

    Yes, I wonder why you did not participate in PulseAudio and propose your suggestion.
    yes,
    i have 44.1, 48 and 96kHz music
    thing is, almost all sound cards can output at only one sampling rate at a time,
    meaning that all sound has to be resampled to that frequency (and bit depth) before being sent to the card

    if that direct .asoundrc causes a failure, then something is wrong
    probably it is PA holding the sound card open so the player can't change the rate or format
    you can check that with "cat /proc/asound/card0/pcm0p/sub0/hw_params" (it should say "closed" when nothing has the device open)
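    for reference, a minimal direct-hw .asoundrc looks something like this (the card/device numbers here are just an assumption; list the real ones with "aplay -l"):

```
# ~/.asoundrc -- direct hardware access, no resampling layer in between
# hw card 0, device 0 is an assumption; check your devices with `aplay -l`
pcm.!default {
    type hw
    card 0
    device 0
}
ctl.!default {
    type hw
    card 0
}
```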

    as for why i didn't:
    well, i don't like the idea of a sound server, as it introduces a minimum delay (the size of the app-server ringbuffer + other stuff)
    and there was already JACK, which did it properly
    it just had/has a UI aimed at professionals, which is too much data for new people

    anyway; i went and tested all that
    i used sox to play a 44100Hz, 16bit, stereo flac file
    i have a 3.5GHz cpu, using the performance governor and without turbo-boost

    sox using the direct method uses less than 0.5% cpu
    Code:
    bash-4.2# cat /proc/asound/card0/pcm0p/sub0/hw_params
    access: RW_INTERLEAVED
    format: S16_LE
    subformat: STD
    channels: 2
    rate: 44100 (44100/1)
    period_size: 2048
    buffer_size: 16384
    note the period and buffer sizes, as that is the delay/latency

    jack with default settings uses around 0.5% cpu
    sox then uses ~1.4%
    since the default sampling rate for jack is 48kHz, changing it to 44.1kHz lowered sox cpu usage to 0.5%
    Code:
    bash-4.2# cat /proc/asound/card0/pcm0p/sub0/hw_params
    access: MMAP_INTERLEAVED
    format: S32_LE
    subformat: STD
    channels: 8
    rate: 48000 (48000/1)
    period_size: 1024
    buffer_size: 2048
    and all that with a period size of 1024 frames
    btw, "access: MMAP_INTERLEAVED" means "zero-copy" in alsa speak

    PA uses ~0.6% cpu
    sox with it uses the same as with JACK at 44.1kHz
    PA default is 44.1kHz so that's expected
    Code:
    bash-4.2# cat /proc/asound/card0/pcm0p/sub0/hw_params
    access: MMAP_INTERLEAVED
    format: S16_LE
    subformat: STD
    channels: 2
    rate: 44100 (44100/1)
    period_size: 88200
    buffer_size: 88200
    note the difference in the period sizes of PA and JACK
    i don't even understand where it got that value from
    it's not divisible by 1024, though it is exactly 2x 44100 (two seconds' worth of frames)

    in order to figure it out i went to change daemon.conf
    what i found there were two settings that i guess are used to calculate the period size and count
    Code:
    ; default-fragments = 4
    ; default-fragment-size-msec = 25
    changing them had no effect on the buffer size
    the default is 4, i guess that means periods (JACK's is 2)
    the size is 25ms, which gives 1102.5 frames, so it is... imprecise, and not aligned to a page in memory
    (a page is 4k, so a period of 1024 frames using 16bit stereo would fit exactly in it)
    i guess getting actual data that confirms the settings would require asking it over dbus or something
    around that point is when i gave up trying to configure PA to test other use cases
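    the arithmetic above, as a quick sketch (assuming 44100Hz, 16bit stereo):

```python
# default-fragment-size-msec = 25 at 44100 Hz:
frames_per_fragment = 0.025 * 44100   # 1102.5 -- not even a whole number of frames
# a 1024-frame period of 16-bit stereo audio, for comparison:
period_bytes = 1024 * 2 * 2           # frames * channels * bytes per sample = 4096
print(frames_per_fragment)            # 1102.5
print(period_bytes)                   # 4096 bytes, exactly one 4k page
```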

    as for the actual minimum software latency introduced by those 2 servers:
    i can not calculate it, since PA uses a weird ring buffer size
    (i guess it plays with alsa's internal pointer to effectively make a ring buffer in that memory)
    but i guess from the period count (and calculated size) that it is a little more than 2x JACK's, if they are both left at default
    as a reference, 1024 frames = ~23.2 ms; this is helpfully pointed out by qjackctl when setting it (qjackctl actually gives 46.4, the whole-buffer latency)
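    the frames-to-milliseconds conversion used here, sketched out:

```python
def frames_to_ms(frames, rate):
    # one buffer of `frames` frames at `rate` Hz adds this much latency
    return frames / rate * 1000.0

print(round(frames_to_ms(1024, 44100), 1))  # 23.2, one JACK period
print(round(frames_to_ms(2048, 44100), 1))  # 46.4, the whole 2-period buffer
```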

    and i am too lazy to dig around for an old microphone cable to cut (or to hack my simple player into a test)


    PS: players can also try to use the mmapped alsa interface
    they won't always get it, and in that case the sound will be resampled
    Last edited by gens; 13 October 2014, 04:12 PM.



    • Originally posted by gens View Post
      as a reference 1024 frames = ~ 23.2 ms
      also to point out
      average human response time to visual stimuli is 210ms (you can test yourself here)
      while the avg human response time to audio stimuli is ~150ms
      at 60fps the delay between frames is 16.666.. ms
      the human eye is at about 60fps (up to 75)
      xorg introduces, if i remember correctly, 5ms of input delay on top of everything else (like the mouse poll rate, for example)
      and your brain compensates for all that, among other things

      so much about latency
      (and no, i won't discuss anything said here except xorg; as it is the only thing i'm not sure about)



      • Originally posted by interested View Post
        Like what? The only core components in systemd are the init/process controller (systemd), udev, and journald. And yes, even those can be removed too, which is explained in the docs. The sNTP client isn't part of the init system, something you would have known if you had read the documentation, but is an optional, super-lightweight daemon added especially for supporting OS containers (look it up, it is really cool tech).

        Same with the dhcp client; it is ultra-lightweight and extremely fast, something that is important when you boot up many OS containers or computing nodes in parallel. They have all been added on request from systemd end users who made a use case for their inclusion.

        If you want to use another sNTP or dhcp client, it isn't a problem at all. Do you want old legacy-style text logs? Just use rsyslog as always, and journald will act as a syslog helper client. Insisting on starting your daemons with a SysVinit script? No problem either for systemd.

        AFAIK, there are no seriously maintained alternatives to udev or logind on Linux, except two code forks. There just seems to be almost zero interest in developing any alternative code to systemd's.
        mdev - Not user-friendly but it works and is suitable to use in serious (though usually embedded) applications.

        consolekit + a DM. Yes, it's not actively maintained, but still works just fine.

        Anyways, the problem isn't with udev; it's a good bit of code that does its job. Making systemd a hard dependency is what was controversial, rather than abstracting optional APIs for the new functionality that systemd wanted to add to udev, which would then be potentially available to other init systems. Logind seems very useful if you ever want to play around with multiseat.



        • Originally posted by TeamBlackFox View Post
          ....

          I think that Poettering would be a REALLY good Windows developer, but he'd have to learn C# and .NET then. And that is what he has done in effect with most of his applications - reimplemented Windows concepts as programs. Systemd is basically the entire default service lineup for Windows. It also takes some annoying traits from Windows, such as the wonderful use of binary stacks for storing logs. The Registry is cack as it is, and that is effectively an entire OS in binary stacks.

          The Linux kernel has been hijacked from its original intent - from a POSIX-compliant, open-source kernel, to a Windows or Mac desktop replacement. However, the fundamental differences between Windows, Mac and the UNIX family and clones are so divergent that none of them should, ideally, intersect. Consider that Windows and Mac have their own centralised configuration stacks, the Registry and /Library. The closest blood descendants of the Bell Labs UNIX don't have this; they instead use plain-text config files located in the directory tree for their application. Some versions such as AIX have tried to do this their own way, using smit for example, but in practice you soon realise it is faster to open vi and edit the config file manually, especially for users like myself with symptoms of RM injury, which is aggravated by excess mouse use. People like Poettering have tried to make GNU/Linux into an open-source version of Windows, and fail continuously at this goal for a few reasons:

          The components were never designed for this purpose
          The components will never be unified (BSD is unified across userland, kernel, shell and utilities; systemd is userland-only, therefore it remains a disjointed, monolithic yet modular mess; and the Linux kernel isn't tied to GNU, so it won't ever be unified)
          The components lack the proper quality control and usability that you find in a commercial OS.

          As GNU/Linux continues its fall from grace, I'm wondering what the future holds. OS X fell from grace during the POWER-Intel transition and the Leopard-Snow Leopard era; Windows was never graceful.

          ...
          Systemd keeps configuration files as text. The advantage is that a program distributor only needs to write one of them instead of one for each distro. Standardization isn't bad so long as the standard is well-considered.

          Binary logs let you make guarantees and provide features a text log can't. Systemd gives you binary logs as an addition to, rather than a replacement of, text logs anyway. Additionally, this feature of systemd allows you to log early boot.

          And of course Linux isn't going to be Windows, it's going to be better, and in many respects this disjointed and slippery ecosystem is what drives a lot of innovation and experimentation that ultimately improves the stack.



          • Originally posted by JS987 View Post
            SystemD has more features, like D-Bus activation and socket activation, which means it has a bigger attack surface.


            D-Bus activation, from what I can tell, moves security policy from policykit to dbus; it seems just to be a different attack surface.

            Socket activation doesn't seem to me to be particularly security-sensitive, as mainly it provides message queues or IPC for a process that may not yet be running. In fact I can see a security advantage, as it allows services to update without stopping, and catches messages should a daemon lock up or halt. Yes, you have more things that need to be correct, but in this case they are definitely worth doing, and they don't seem particularly problematic security-wise. Perhaps you'd like to explain why it's a security risk, other than adding (useful) complexity.
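            For reference, socket activation is just a pair of unit files; a hypothetical sketch (the unit names, port and binary path are made up):

```
# echo.socket -- systemd listens on the port and starts the matching
# service on the first incoming connection
[Socket]
ListenStream=7777

[Install]
WantedBy=sockets.target

# echo.service -- receives the already-open socket via sd_listen_fds()
[Service]
ExecStart=/usr/local/bin/echo-daemon
```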



            • Originally posted by WorBlux View Post
              mdev - Not user-friendly but it works and is suitable to use in serious (though usually embedded) applications.
              Can it even automount an eSATA device on a multi-user system? I would definitely label mdev as a "tiny udev alternative for embedded devices", not a serious alternative to udev.


              Originally posted by WorBlux View Post
              consolekit + a DM. Yes, it's not actively maintained, but still works just fine.
              No it doesn't; bugs can't be fixed, and upstream projects are having an increasingly difficult time supporting CK. AFAIK, you already have to live with limited functionality when using CK on several DEs.
              There is no security work for CK, no security mailing list, no structure for distributing patches or security alerts; in short, no upstream.

              Another problem is, of course, getting developers to start coding CK support in new projects, when they know it has been deprecated for years and by now must be seen as permanently dead, with no hope of anyone ever taking responsibility for it.

              All open source projects are stretched to the limits of their manpower, so it is quite understandable that e.g. KDE didn't include CK support in their new login manager. Of course, the systemd opponents don't step up and offer to code the necessary CK support either.


              Originally posted by WorBlux View Post
              Anyways the problem isn't with udev, it's a good bit a code that does it's job. Making systemD a hard dependency is what was controversial, rather than abstracting optional api's for new functionality that systemD wanted to add to udev that would then be potentially available for other init systems. Logind seems very usefull if you ever want to play around with multiseat.
              No, the problem is that people seem to think they own other people's code, and that they can make demands on what open source developers do with their code.

              That udev moved to the systemd project was demonstrably the right thing to do; it went from being a mostly solo project to being part of one of the biggest open source developer communities in the world. Both projects were a perfect match for what each wanted.

              For the tiny minority that doesn't want to use systemd, it is entirely up to you to write all the necessary code to make that happen. Fork the systemd code if you are unable to write your own, but don't blame the systemd developers for not working for free to make it work for you.



              • Originally posted by gens View Post
                yes, there is nothing new
                what i was doing with that piece of text was comparing doing something with nice and doing something with cgroups
                as i calculated, giving a nice of 20 to the "background" process gives exactly the same effect as putting it in a cgroup and limiting it to 1.14%
                even better, since it does it by a ratio of current cpu usage, not by a rolling average
                To me it looks like you misunderstand how "nice" works. "nice" is all about yielding to higher priorities and thereby putting "soft" limits on a process. But just because a process has the lowest scheduling priority doesn't mean it won't hog CPU time if it can get it.

                Try running "stress" on a fairly idle system. It generates CPU load (it should be in most distro repos; or use something similar, like cpuburn).

                renice the "stress" process to 19 and watch "top". Then try renicing "stress" to -20 and watch top again. As you can see, "stress" will hog 95-100% cpu time even when it has the lowest possible nice priority. This is a good thing in some scenarios; in others, not so much.

                What cgroups can do on a systemd box is put a _hard_ limit on cpu time, like 25%. No group of processes marked as such can ever take more than 25% cpu time on a single cpu/core.
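                As a sketch, on a systemd box that hard cap is a one-line unit setting (the unit name below is hypothetical; CPUQuota= is translated into the kernel's cpu.cfs_quota_us/cfs_period_us cgroup knobs):

```
# /etc/systemd/system/example.service.d/limit.conf -- hypothetical drop-in
[Service]
CPUQuota=25%
```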

                What could be a cool idea is that when someone launches a Wayland window to play a game, all other windows get close to zero percent of resources. That way there is no reason to shut down that 25-tab Firefox browser that burns away cpu time before gaming; the system dynamically allocates resources and resource limits and makes sure nothing else disturbs the game.



                • Originally posted by interested View Post
                  Can it even automount an e-sata device on a multi-user system? I would definitely label mdev as a "tiny udev alternative for embedded devices", not a serious alternative to udev.
                  Yes, and it can even auto-unmount if you edit the sudoers file.
                  I think the bigger issue is that a lot of programs depend on udev directly (the chrome/chromium browser, libatasmart, guvcview, udisks and bluez have hard dependencies, and another dozen or so have optional support; and this is on a system I try to keep fairly minimal for my needs), so it's definitely not an option on the typical desktop.

                  Originally posted by interested View Post
                  ...
                  No, the problem is that people seem to think they own other peoples code, and that they can make demand on what open source developers do with their code.

                  ...
                  Linus at least is really big on not breaking userspace.

                  Not breaking other dependent code is just good manners. Yes, the code can be and has been forked, but ideally you use a fork for some wild new idea that needs more work, and it will be folded back in if it pans out. Forcing a break means maintaining compatibility takes a lot of continuing effort that might be better spent elsewhere.



                  • Originally posted by WorBlux View Post
                    Yes, and it can even auto-unmount if you edit the sudoers file.
                    I think the bigger issue is that a lot of programs depend on udev directly (the chrome/chromium browser, libatasmart, guvcview, udisks and bluez have hard dependencies, and another dozen or so have optional support; and this is on a system I try to keep fairly minimal for my needs), so it's definitely not an option on the typical desktop.
                    Well, my point was exactly that mdev isn't a serious alternative to udev, which you seem to confirm here. Sure, it could become one, but that would require a developer community that doesn't seem to exist.


                    Originally posted by WorBlux View Post
                    Linus at least is really big on not breaking userspace.

                    Not breaking other dependent code is just good manners. Yes, the code can be and has been forked, but ideally you use a fork for some wild new idea that needs more work, and it will be folded back in if it pans out. Forcing a break means maintaining compatibility takes a lot of continuing effort that might be better spent elsewhere.
                    AFAIK, udev didn't break userspace when it was integrated into systemd. That people would have to patch it in order to use it independently is exactly what forking is all about.

                    The udev-systemd integration didn't break compatibility for any systemd distro either. That people want to use some of systemd's code without using systemd as init is their problem.

                    Let me stress that; people who don't want to use systemd, have the sole and complete responsibility of making their Linux distro work, including either to develop or fork any necessary code.

                    Yes, it would be convenient for the non-systemd users if the systemd developers did all this work for them, but that is an unreasonable requirement by any standard.



                    • Originally posted by interested View Post
                      To me it looks like you misunderstand how "nice" works. "nice" is all about yielding to higher priorities and thereby putting "soft" limits on a process. But just because a process has the lowest scheduling priority doesn't mean it won't hog CPU time if it can get it.

                      Try running "stress" on a fairly idle system. It generates CPU load (it should be in most distro repos; or use something similar, like cpuburn).

                      renice the "stress" process to 19 and watch "top". Then try renicing "stress" to -20 and watch top again. As you can see, "stress" will hog 95-100% cpu time even when it has the lowest possible nice priority. This is a good thing in some scenarios; in others, not so much.
                      no, i didn't
                      and stop being smart

                      the kernel's scheduler calculates the time it will give to a process
                      in advance, every "epoch"
                      in a scenario with only 2 processes running, a process with a "weight" a hundred times greater than the other will get 99% of the epoch's time
                      SO STOP THE BULLSHIT THINKING

                      processes don't have "gears", they don't have a "speed" and they do not have a limit
                      except a run-time limit, within an epoch
                      so if the process with the lower nice yields, the process with the higher nice will get more time
                      AND THE PROCESS WITH THE LOWER NICE WON'T EVEN NOTICE IT
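                      a sketch of that proportional-share arithmetic (the weights are from the kernel's nice-to-weight table, nice 0 -> 1024 and nice 19 -> 15; treat the exact numbers as an assumption for your kernel version):

```python
# CFS hands out CPU time in proportion to task weight, per scheduling period.
def share(weight, other_weights):
    # fraction of CPU a task gets when every task is runnable the whole time
    return weight / (weight + sum(other_weights))

print(round(share(1024, [15]) * 100, 1))  # 98.6 -> the nice-0 task
print(round(share(15, [1024]) * 100, 1))  # 1.4 -> the nice-19 task
```

                      and if the nice-0 task sleeps or yields, the nice-19 task simply gets the leftover time, which is the point above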

                      do i have to repeat the same thing 10 times ?

                      you go run some tests
                      you don't need a "stress" utility, you can use "dd if=/dev/urandom of=/dev/null" (add & and run for every cpu core)



                      a great example is me playing dota2
                      dota 2 is far from an optimized game (they all are)
                      my cpu is just about on the edge of running it well

                      i played a couple of rounds and after i exited, guess what:
                      turns out i forgot to turn off the litecoin miner
                      it had been trying to use 100% of all my cores, and i didn't even notice it
                      why don't you explain that to me ? (don't)


                      and again;
                      YOU CAN NOT LIMIT FIREFOX MEMORY USAGE
                      it either uses a lot of memory, or it dies
                      simple as that

                      also there is a scenario where i want to play a youtube video while playing dota, or have voice chat or even a video call on a different monitor
                      i don't want that youtube/chat to stutter just because someone was smart enough to put it in a fucking cpu-limited sandbox

                      i told you that i'm done with this bullshit reasoning
                      YOU make a test that backs up your case, or at least think a little and stop with the bollocks
                      Last edited by gens; 15 October 2014, 10:17 AM.

