KLANG: A New Linux Audio System For The Kernel


  • #31
    Originally posted by dimko View Post
    Let me remind you of the principle that ANY engineer follows: IF IT AIN'T BROKEN, DON'T FIX IT!
    Funny, considering that according to a post by the KLANG author, ALSA is currently broken from an engineering point of view.
    Read it if you can (it's German, maybe a translation tool helps): http://www.heise.de/open/news/foren/...22197656/read/



  • #32
    I am an ALSA user, and while I don't see anything wrong with it, I see what this project is trying to address - putting hardware management back into the kernel. The same thing happened with video (KMS), and people were initially complaining, but now everyone is happier. Furthermore, if it is going to be transparent to OSS-supporting apps, it will be a drop-in replacement. I mean, ALSA supports OSS-compatible apps, Pulse supports ALSA and OSS... This is more of a "reworking the plumbing" than "reworking the dashboard".



  • #33
    Alsa and PA

    Why not just port ALSA or PA into the kernel?
    ALSA and PA are working really well, at least for me.
    And it seems OSS4 ain't really popular here.
    So what are the technical implications, and why isn't he doing that?



  • #34
    Originally posted by PaulDavis View Post

    Paul, could you also answer why Mac OS X does a better job with a single "stack", why we can't have the same on Linux, and why there is no effort to merge JACK and PA (lack of manpower was the reason the last time I heard about it)?



  • #35
    Originally posted by PaulDavis View Post
    Wow, Paul. I didn't even realize you visited/posted at Phoronix.

    Thanks for posting this information. I was aware of the floating-point problem, as well as the driver-rewriting problem, but the others I had not really thought about in depth. It's also nice to hear the perspective of someone who is in the know and as knowledgeable about this stuff as you are. Thanks.

    But, regardless of reading your post at ardour.org ~ I think KLANG is generally a stupid idea. We don't need yet another sound system for Linux. If there are problems with ALSA, PA and JACK, it seems far more reasonable and realistic to work on those instead.

    I also found Ben Loftis's (from Harrison Consoles) comments interesting:

    Originally posted by Ben Loftis
    I'd like to reiterate a few points based on our experience with Linux audio at Harrison.

    We have a product called Xdubber that was developed 5 years ago. The Xdubber uses a custom JACK driver (no ALSA). The system operates with 8-sample buffers (compared to the much more common 1024 or, at best, 64 samples provided in most systems) for extremely low latency. We send 64 channels of 96kHz audio in and out of the system. I have tested this system for days using an Audio Precision bit-test with error-checking turned on.

    Our findings with an actual commercial product have shown that:
    *JACK has a very minimal CPU/memory footprint.
    *JACK has nothing to do with xruns. It reports xruns that happen at the driver and/or application level which would otherwise go unreported.
    *There is no fundamental issue with the coding style of JACK. It's just very hard work to do this kind of plumbing.
    *An ultra-high performance, ultra-low latency system can be built with JACK.

    I've had similarly good experiences with the best-implemented ALSA devices, such as RME and older M-Audio.

    There ARE a lot of issues with Linux audio. But they stem mostly from the unbelievably wide range of use-cases across applications and devices. And the fact that Linux users actually have higher expectations of their audio system... for example, Windows still doesn't have the concept of virtual MIDI ports, much less inter-application routing. OSX's CoreAudio is better but still lacks fundamental features of ALSA and JACK.

    -Ben Loftis
    Harrison Consoles
    I thought I'd post his comments for those who are lazy, or who missed Paul's original post.
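    To put those buffer sizes in perspective, here is the per-period latency arithmetic - my own back-of-the-envelope C sketch, not anything from Harrison:

      /* Per-period latency = frames per period / sample rate.
       * Uses the figures quoted above; illustrative only. */
      #include <stdio.h>

      int main(void)
      {
          const double rate = 96000.0;             /* sample rate (Hz), as quoted */
          const double sizes[] = { 8, 64, 1024 };  /* period sizes mentioned above */
          for (int i = 0; i < 3; i++)
              printf("%6.0f frames -> %8.1f us per period\n",
                     sizes[i], sizes[i] / rate * 1e6);
          return 0;
      }

    That works out to roughly 83 us per period for 8 frames, versus about 10.7 ms for the common 1024-frame setting at 96 kHz.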
    Last edited by ninez; 31 July 2012, 12:40 PM.



  • #36
    Originally posted by RealNC View Post
    The OS X kernel has RT facilities. But you're missing the point: you don't need RT for this. The reason why RT is used in Linux for audio just shows the problems the audio stack has. RT is not needed, unless you're doing something wrong in the audio infrastructure.
    Given that the audio subsystem runs in userspace, if you need latency around 5 ms while the kernel has to process tons of events (interrupts), then you need real-time responses for the audio thread and the kernel needs to be preemptible.

    Standard Linux can't do it right now, even by tweaking granularity settings:

    ...and/or giving a process a real-time priority (RR/FIFO) - see the sketch below.

    So if the video driver at kernel level is interrupt-driven and blocks things hard during an effect, it is perfectly normal for the audio to skip.

    Building a preemptible kernel via the rt patchset, or having the audio subsystem at kernel level, would work.
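    For reference, "giving a process a real-time priority (RR/FIFO)" boils down to the standard POSIX scheduling API. A minimal sketch (not KLANG code; the priority value 70 is an arbitrary example, and it only succeeds with CAP_SYS_NICE or an rtprio entry in /etc/security/limits.conf):

      /* Ask the kernel to schedule the calling thread with SCHED_FIFO. */
      #include <pthread.h>
      #include <sched.h>
      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          struct sched_param param;
          memset(&param, 0, sizeof param);
          param.sched_priority = 70;  /* 1..99; higher preempts lower */

          /* pthread_setschedparam returns an errno value directly. */
          int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
          if (err != 0)
              fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));
          return err ? 1 : 0;
      }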



  • #37
    Normally I'd be interested, but now that we have all these different Linux audio standards, this generally feels pointless. The guy does have good points about wanting to put all the audio stuff into the kernel, but is it really that necessary?

    I'm no audio engineer or musician, but isn't PulseAudio doing a fairly good job as far as low latency goes? It had problems when it was first introduced, but not so much anymore from what I understand, and generally many people just seem satisfied with it. It should be good enough for games at least, right?

    And audio engineers, like many others mentioned here, can simply use JACK and a low-latency kernel if they need latency that low for their work.



  • #38
    You know... with PulseAudio I despise the extra daemon running in the background. Oh... *Idea* - let's hide the whole shebang in the kernel! That way they can't see it right off... It really would be better if the parts that have to run all the time got merged into the kernel and the parts that don't stayed in userspace. The problem with this is that audio problems become harder to fix, since it increases the likelihood of requiring a kernel recompile.

    Clearly you can use FP in the kernel (http://www.linuxsmiths.com/blog/?p=253), though I can see how there could be drawbacks if you tried doing a lot of FP in the kernel - see the sketch below.
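    For the curious, the x86 pattern that blog post describes looks roughly like this. A sketch only: the guard functions are real kernel API, but an actual module would also need special compiler flags, since the kernel is normally built with FP/SSE code generation disabled:

      #include <linux/kernel.h>
      #include <asm/fpu/api.h>   /* <asm/i387.h> on older kernels */

      static void mix_buffers(float *dst, const float *src, int n)
      {
          int i;

          kernel_fpu_begin();    /* save user FPU state, disable preemption */
          for (i = 0; i < n; i++)
              dst[i] += src[i];  /* FP instructions are safe inside the guards */
          kernel_fpu_end();      /* restore FPU state */
      }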



  • #39
    Originally posted by peppepz View Post
    Even if his chances of seeing the community broadly accept yet another sound system are thin IMHO, if this guy could manage to create a sound system that is

    1) simple in its architecture (no self-sentient userspace daemon required);
    2) simple in its usage (zero configuration required, at least to put the audio hardware in a sane state);
    3) moderate in its requirements (a Pentium 4 should be able to play an MP3 without suffering at all);

    then it might be a good thing.
    Shouldn't a Pentium 133 MHz be able to play an MP3 fluently?



  • #40
    Originally posted by 89c51 View Post
    Paul, could you also answer why Mac OS X does a better job with a single "stack", why we can't have the same on Linux, and why there is no effort to merge JACK and PA (lack of manpower was the reason the last time I heard about it)?
    It's complicated. OS X does a better job with a single stack because it has a very clever design that we unfortunately missed (read: didn't know about) back in the early days of ALSA. I don't feel so bad about this - Windows didn't get anything remotely resembling this idea till years later either. The crux of the matter is that CoreAudio is NOT dependent on the flow of interrupts from the device to be able to know with fairly high precision where the current audio interface read and write locations are in memory. This means that applications with entirely different latency requirements can write into a shared memory buffer without requiring that they all do so synchronously (the way that JACK requires). An application that wants to write 8 samples at a time can do so, and at the same time, another application that wants to deliver 4096 or even 64k samples at a time can also do so - CoreAudio can ensure that both of them end up putting audio in the "right" place in the mix buffer so as to match their latency requirements.

    It does this using an engineering design called a delay-locked loop (DLL), which actually makes the latency slightly worse than on Linux - CoreAudio drivers have a "safety buffer" to account for possible errors in the estimates that the DLL provides for the current read/write pointer locations, and this adds to the overall latency. It's fairly small, though - typically 8-32 samples. Now, there is nothing fundamental stopping ALSA from adopting this kind of design. But nothing stopping it isn't the same as making it happen - that would require a significant engineering effort.
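    (For illustration, the DLL idea fits in a few lines of C. This follows the textbook audio-DLL formulation - see e.g. Fons Adriaensen's "Using a DLL to filter time" - not CoreAudio's actual code:)

      /* Second-order DLL: turn jittery per-period interrupt timestamps
       * into a smooth estimate of when the next period starts, i.e.
       * where the hardware pointer sits between interrupts. */
      #include <math.h>

      struct dll {
          double t0, t1;  /* filtered start times of this and the next period */
          double e2;      /* filtered period duration */
          double b, c;    /* loop-filter coefficients */
      };

      static void dll_init(struct dll *d, double now, double period, double bw_hz)
      {
          double omega = 2.0 * M_PI * bw_hz * period;
          d->b  = sqrt(2.0) * omega;
          d->c  = omega * omega;
          d->e2 = period;
          d->t0 = now;
          d->t1 = now + period;
      }

      /* Call once per period interrupt with the raw timestamp "now". */
      static void dll_update(struct dll *d, double now)
      {
          double err = now - d->t1;     /* raw timing error of this period */
          d->t0  = d->t1;
          d->t1 += d->b * err + d->e2;  /* corrected prediction of next period */
          d->e2 += d->c * err;          /* corrected period-duration estimate */
      }

    Between interrupts, the pointer position can be interpolated from t0/t1, which is what lets clients with different period sizes write into the shared buffer at the right offsets.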

    In addition, even without this, you still have to decide where sample rate conversion and sample format conversion will take place - in user space, or in the kernel. OS X does not have a prohibition on floating point in kernel space (which contributes a teeny bit to why their kernel is slower than Linux for many things). So historically, they have done some of this in the kernel. They won't talk much about the details, but it appears that in recent OS X releases (Lion & Mountain Lion) they have moved away from this and now have a user-space daemon, conceptually similar to Pulse and/or JACK, through which all audio flows. This is slight speculation - you can see the server running with ps(1), but Apple has never said anything about it. It's also not clear whether the shared memory buffer into which applications ultimately write their audio data is in user space or kernel space, and this may also have changed recently. The key point is that even with the DLL-driven design in the kernel, there are still tricky, fundamental aspects of API design that you have to tackle, and even on OS X, the answers are not fixed in stone.
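    (To be concrete, "sample format conversion" means transforms like the following - a naive float-to-16-bit sketch of mine, not ALSA or CoreAudio code; the whole debate is about which side of the kernel boundary it runs on:)

      #include <stdint.h>

      /* Convert normalized float samples in [-1.0, 1.0] to signed 16-bit PCM. */
      static void f32_to_s16(int16_t *dst, const float *src, int n)
      {
          int i;
          for (i = 0; i < n; i++) {
              float s = src[i];
              if (s >  1.0f) s =  1.0f;   /* clip out-of-range samples */
              if (s < -1.0f) s = -1.0f;
              dst[i] = (int16_t)(s * 32767.0f);
          }
      }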

    Interestingly, note that PulseAudio has (or was going to have) this DLL-driven design too - Lennart calls it "glitch-free" - but adding it to PulseAudio (or JACK) doesn't do anything about what goes on at the ALSA layer.

    As for merging JACK + PulseAudio: manpower remains an issue, but more importantly, the goals of the two projects are not really that similar, even though to the many idiot-savants who post to Reddit and Slashdot they sound as if they should be. There are ways that it could happen, but it would require a huge level of desire on the part of all involved, and given the difficulties we have internally with two different JACK implementations, it just seems unlikely.

