"PulseAudio Is Still Awesome"

This topic is closed.

  • #51
    Originally posted by Rallos Zek View Post
    Every time I try a Linux distro and have sound problems, nine times out of ten it's PulseAudio, and removing it makes my problems magically disappear. Pulse is pure shit; it makes me miss the days of ESD and aRts.
    And ninety-nine times out of a hundred your sound system works just fine when you try a Linux distro, so your argument is "pure shit", according to your wording.

    • #52
      Originally posted by TeamBlackFox View Post
      When I say blocking I/O, I mean there's no provision for multiple inputs to be handled by ALSA; it has a tendency to listen to one process at a time, which leads to very unpredictable behavior.
      Exactly. It behaves more like a driver and less like a userland audio server. PA is here to do the userland audio server part of the job. Sometimes ALSA problems are revealed by this setup, but since they're not exposed without PA, people tend to think it's PA's fault.
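
      To make "one process at a time" concrete, here's a minimal sketch of mine (an illustration, not anyone's production code; it assumes libasound, linking with -lasound, and a card without hardware mixing): the second open of the raw hw device fails, typically with EBUSY, while opening "default" (dmix, or PulseAudio when installed) succeeds because a userland mixing layer sits in between.

          #include <alsa/asoundlib.h>
          #include <stdio.h>

          int main(void)
          {
              snd_pcm_t *first, *second;

              /* First client grabs the raw hardware device exclusively. */
              if (snd_pcm_open(&first, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0) < 0) {
                  fprintf(stderr, "could not open hw:0,0 at all\n");
                  return 1;
              }

              /* A second open of the same raw device fails: no mixing here. */
              int err = snd_pcm_open(&second, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0);
              printf("second open of hw:0,0: %s\n",
                     err < 0 ? snd_strerror(err) : "ok");

              /* Through "default" a userland plugin/server does the mixing,
               * so concurrent clients are fine. */
              err = snd_pcm_open(&second, "default", SND_PCM_STREAM_PLAYBACK, 0);
              printf("open of default: %s\n",
                     err < 0 ? snd_strerror(err) : "ok");

              if (err >= 0)
                  snd_pcm_close(second);
              snd_pcm_close(first);
              return 0;
          }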


      Originally posted by TeamBlackFox View Post
      Where I am now, in BSD and commercial UNIX land, multiple sound systems are neither acceptable nor the norm. You can't simply tell someone that when, as it turns out, JACK and PA don't have anywhere near the same API, which forces a user to find a new program if the one they want to use relies on PulseAudio. So no, try again.
      Yeah, there's something called abstraction layers...
      Developers have been targeting multiple APIs and platforms for years. If your problem is choosing between JACK and PA, you might need an architecture redesign of your app.
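
      In case it helps, a sketch of what such a layer can look like (all names here are hypothetical, purely for illustration): the app codes against one tiny interface, and a backend wrapping JACK or PA is picked at runtime.

          /* Hypothetical abstraction layer: the app never touches JACK or
           * PulseAudio directly; each backend struct wraps one real API. */
          typedef struct audio_backend {
              const char *name;
              int  (*open)(unsigned rate, unsigned channels);
              int  (*write)(const float *frames, unsigned nframes);
              void (*close)(void);
          } audio_backend;

          /* Imagined backends, e.g. wrapping jack_client_open() or
           * pa_simple_new() internally. */
          extern const audio_backend jack_backend;
          extern const audio_backend pulse_backend;

          const audio_backend *pick_backend(int want_low_latency)
          {
              /* One policy decision; the rest of the app stays API-agnostic. */
              return want_low_latency ? &jack_backend : &pulse_backend;
          }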


      Originally posted by TeamBlackFox View Post
      If I were to design a sound stack, it would consist of a kernel-space device driver that communicates with a master set of userspace processes through STREAMS. It would have a centralised error log, would autodetect device configuration with a set of sysctls to control it, and would be designed from the start to be multithreaded and capable of handling multiple sound inputs, mixing them through an optional curses-based realtime mixer module. STREAMS, in my opinion, is the only way to design this so it doesn't suck, using a well-established, zero-copy protocol for IPC.
      Sounds a lot like ALSA (kernel) + PA (user). How would you keep zero copy when multiple clients with varying sample frequencies and formats are playing at the same time?
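
      To spell the problem out in code (a naive nearest-neighbour resampler, purely for illustration): the moment two clients disagree on rate or format, every sample has to be converted on its way into the device buffer, i.e. at least one transforming copy.

          /* Why "zero copy" breaks down at the mixing point: combining a
           * client stream at one rate/format into a device buffer at another
           * requires converting every sample. */
          static void mix_into(float *out, unsigned out_frames, unsigned out_rate,
                               const short *in, unsigned in_frames, unsigned in_rate)
          {
              for (unsigned i = 0; i < out_frames; i++) {
                  /* Map the output position back to the client's timeline. */
                  unsigned j = (unsigned)((unsigned long long)i * in_rate / out_rate);
                  if (j >= in_frames)
                      break;
                  /* S16 -> float conversion plus accumulation: a copy. */
                  out[i] += in[j] / 32768.0f;
              }
          }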

      • #53
        I always have a hard time purging and removing pulseaudio. If you have a decent card, it works better without it. A decent card allows concurrent access by multiple applications, since mixing is done on the card. Most PCs come with low-quality cards (a single PCM). That in itself can also be OK. I use jackd for an Exynos 5422-based desktop and it works quite nicely: replugging inputs and outputs, routing some outputs to the headset and some to the amplifier. For a digital audio workstation you use either bare ALSA or jackd on ALSA.
        Now, there are times you don't want to mess around and need a quick and dirty fix. That's about the only time I use pulseaudio. It adds audio latency, though, so it is a temporary fix.
        What might be useful is to run pulseaudio on top of jackd for those applications you don't want to bother with, like Chrome. For now I have an alsa->jackd bridge for the browsers; a sketch of that bridge follows.
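
        For reference, the bridge can be as simple as an ~/.asoundrc like the one below (a sketch, assuming the JACK PCM plugin from alsa-plugins is installed; port names follow JACK's usual system:playback_* convention). Plain ALSA apps then land in jackd without knowing it:

            pcm.!default {
                type plug
                slave { pcm "jack" }
            }

            pcm.jack {
                type jack
                playback_ports {
                    0 system:playback_1
                    1 system:playback_2
                }
                capture_ports {
                    0 system:capture_1
                    1 system:capture_2
                }
            }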

        • #54
          Originally posted by matt_g View Post
          Anyone talking about OSS4 or doing sound mixing in kernel space (including the BSD guy above me) is either a) ignorant or b) a troll; I assume both. It's clearly spelled out in a link in the very article for this thread why it's not done: advanced mixing requires floating-point ops, and the Linux kernel guys will not allow FP ops in the kernel. Forget about it; mixing in Linux has to be userspace code, end of story. I'm happy BSD lets you use floating-point ops in your kernel; it's not applicable to Linux.
          :sigh: You didn't even read my entire post to see that I'm not opposed to a userspace audio system. In fact, if I were responsible for building one, I would use a setup somewhat similar to ALSA/PA, but I'd do several things differently to facilitate a better product that could handle the needs of all users without compromising too much in any area; ALSA, PA, and the two together have various compromises that, as a developer, I would find unacceptable. I have no opposition to userspace processes; for crying out loud, IBM i, a system I rather admire for some of its unique design principles, follows a rather interesting kernel design similar to a microkernel.

          Originally posted by mdias View Post
          Yeah, there's something called abstraction layers...
          Developers have been targeting multiple APIs and platforms for years. If your problem is choosing between JACK and PA you might be needing an architecture redesign of your app.
          Well, my applications don't even officially support Linux anymore, because of this. I support FreeBSD and commercial UNIX in the products I am spending time on, and it serves me well. I don't think they're the end-all, be-all, but it works better in my case; go figure. Linux is not where people like me are welcome, it seems.

          Originally posted by mdias View Post
          Sounds a lot like ALSA (kernel) + PA (user). How would you solve the problem of having zero copy when multiple clients with varying sample frequency and formats playing at the same time?
          You know that PA isn't the only userspace audio system, right? :P

          Anyway, this would likely be handled by a STREAMS stack of processes, like this: Sound Dev <-> Kernel Driver <-> Master Sound Service <-> Slave Sound Service <-> Client Program <-> User

          The master sound service would act as the interface between kernel land and userland and would be the mixing source. The slave sound service would be the main multithreaded or multiprocess part; it would take the feed from the program, use the appropriate codec to turn it into a standard PCM stream, then send it along to the master. It should be relatively quick, honestly, and able to guarantee latency, but I'd need to do some serious research into it and refine the process stack to make it as quick as possible.
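
          To pin down what the slave's job would look like in STREAMS terms, here's a rough sketch (my reading of the design above, not a worked implementation; decode_to_pcm() is hypothetical, and putmsg()/<stropts.h> exist only on systems that still ship STREAMS):

              #include <stropts.h>
              #include <stddef.h>

              /* Hypothetical codec step: encoded client feed -> plain PCM. */
              int decode_to_pcm(const unsigned char *in, size_t len,
                                short *out, int max_frames);

              /* Slave sound service: decode one client buffer and push the
               * PCM payload upstream to the master for mixing. */
              int forward_to_master(int stream_fd,
                                    const unsigned char *encoded, size_t len)
              {
                  short pcm[4096];
                  int frames = decode_to_pcm(encoded, len, pcm, 4096);
                  if (frames <= 0)
                      return -1;

                  struct strbuf data;
                  data.buf = (char *)pcm;
                  data.len = frames * (int)sizeof(short);
                  data.maxlen = data.len;

                  /* No control part; just the PCM data block. */
                  return putmsg(stream_fd, NULL, &data, 0);
              }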

          • #55
            Originally posted by computerquip View Post

            He quit due to an overwhelmingly negative response from Linux developers. It's hard to tell if he was capable of such a task, but some of the responses he got carried some serious hatred. Some of the early comments were removed, but have a taste: http://www.reddit.com/r/linux/commen...ext_generation

            EDIT: Oh and he got a massive trolling on Phoronix forums and some other forums as well.
            It makes me a little sad to hear that. He seemed to know quite a lot about audio, and it's a great read seeing people go back and forth. What people don't seem to understand is that this is one guy, and he seems fairly young (around 30). You can't expect one person to know about all the new features of all the different audio systems.

            Of course, this is coming from me, a young guy with little to no knowledge of how Linux audio works.
            Last edited by profoundWHALE; 05 June 2015, 12:10 PM.

            • #56
              Originally posted by wagaf View Post
              DTS-MA and TrueHD are just proprietary lossless codecs (like FLAC, which is free). They don't improve/add anything over PCM as far as audio quality is concerned (PCM is just plain uncompressed audio).

              This means you will get the exact same audio if the DTS-MA track is decoded by Kodi then sent as PCM to your amp.
              Of course I prefer PCM over proprietary crap; however, S/PDIF only supports stereo PCM and my amp doesn't have HDMI.
              Well, when I tried DTS (without MA) on Kodi with PulseAudio, it sent stereo PCM over S/PDIF to the amp, and I thought this also applied to DTS in general. Maybe the configuration was the problem. After removing PulseAudio, DTS and Dolby Digital just work.

              Originally posted by wagaf View Post
              But calling PulseAudio "useless" because of this is not reasonable.
              Calling it "awesome" is not reasonable either. It has many advantages over pure ALSA, but it still has too many disadvantages.

              • #57
                Originally posted by TeamBlackFox View Post
                Well, my applications now don't even officially support Linux, because of this. I support FreeBSD and commercial UNIX in the products I am spending time on - and it serves me well. I don't think they're the end all, be all, but it works better in my case - go figure. Linux is not where people like me are welcomed it seems.
                I don't really think it's a question of you being welcome or not. You are just complaining about something that looks to me like a problem fairly easily solved by an abstraction interface. Maybe PortAudio would suit you well; it's hard to tell without knowing the specifics of the problem you're solving in your apps.

                Originally posted by TeamBlackFox View Post
                You know that PA isn't the only userspace audio system right? :P.
                Sure, but since this thread is about PA and we're discussing its pros/cons, I think it's fair that we talk about it.

                Originally posted by TeamBlackFox View Post
                Anyways, the way this would likely be handled by a STREAMS stack of processes, like this: Sound Dev <-> Kernel Driver <-> Master Sound Service <-> Slave Sound Service <-> Client Program <-> User

                The Master sound service would act as the interface between kernel land and user land and would be the mixing source. The slave sound service would either be the main multithreaded or multiprocess part and would take the feed from the program, use the appropriate codec to turn it into a standard PCM stream, then send it along to the master. Should be relatively quick, honestly and able to guarantee latency, but I'd need to do some serious research into it and refine the process stack used to make it as quick as possible.
                I'll be honest and say I don't really see the purpose of the "Slave Sound Service" you describe, or how multithreading would help here, since it seems like all it does is pass the buffer to the "Master Sound Service". It seems like the system you thought up has more layers than what we currently have with App -> PA -> ALSA -> HW.

                No matter what you come up with, you will always have to implement a way to resample client streams to whatever your HW supports and is set to output.
                Most people use onboard audio these days, and you don't get multiple voices/channels on those. And even if you did, a software mixing+resampling solution is, most of the time, vastly superior to whatever resampling capabilities your multi-voice, multi-sample-rate sound card has. The only advantage of HW is speed, and thus low latency. That said, my ALC889 can reach low latency with JACK and rtirq enabled.

                There really is no escaping the separation between normal users and pro users when it comes to audio.
                Normal users want it to just work, and for things to just work you need at least a system in place ready to resample audio coming from multiple clients; they don't care about buffer sizes, periods, sample rates/formats, or who's mixing the audio. If you're a pro-audio user, you'll just have to live with the current situation and use JACK; your software should work great with it too! (See the sketch below for what "just works" looks like from the client's side.)
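
                For what it's worth, this is exactly the contract PA's simple API offers clients. A minimal sketch, assuming libpulse-simple (link with -lpulse-simple): the client just declares its native rate/format, and the server resamples and mixes to whatever the hardware is actually running at.

                    #include <pulse/simple.h>
                    #include <pulse/error.h>
                    #include <stdio.h>

                    int main(void)
                    {
                        /* The client states its native format; PulseAudio
                         * handles resampling and mixing behind the scenes. */
                        pa_sample_spec ss = {
                            .format = PA_SAMPLE_S16LE,
                            .rate = 44100,
                            .channels = 2,
                        };
                        int error;
                        pa_simple *s = pa_simple_new(NULL, "demo", PA_STREAM_PLAYBACK,
                                                     NULL, "playback", &ss, NULL, NULL,
                                                     &error);
                        if (!s) {
                            fprintf(stderr, "pa_simple_new: %s\n", pa_strerror(error));
                            return 1;
                        }

                        static short silence[44100 * 2];  /* one second of silence */
                        if (pa_simple_write(s, silence, sizeof(silence), &error) < 0)
                            fprintf(stderr, "pa_simple_write: %s\n", pa_strerror(error));

                        pa_simple_drain(s, &error);
                        pa_simple_free(s);
                        return 0;
                    }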

                ASIO also has limitations on Windows, and it seems to be good enough for professionals...
                Last edited by mdias; 05 June 2015, 04:36 PM.

                • #58
                  It's funny how people get so up in arms about PA grabbing the ALSA device when that's exactly what Xorg does with the DRI device =P Nobody seems to complain that only one process can claim and own a graphics card at a time...

                  • #59
                    Originally posted by LightBit View Post
                    Of course I prefer PCM over proprietary crap; however, S/PDIF only supports stereo PCM and my amp doesn't have HDMI.
                    Well, when I tried DTS (without MA) on Kodi with PulseAudio, it sent stereo PCM over S/PDIF to the amp, and I thought this also applied to DTS in general. Maybe the configuration was the problem. After removing PulseAudio, DTS and Dolby Digital just work.
                    Please have a look here: http://kodi.wiki/view/PulseAudio - if it still does not work, drop me a message.

                    • #60
                      Originally posted by mdias View Post

                      I don't really think it's a question of you being welcome or not. You are just complaining about something that looks to me like a problem fairly easily solved by an abstraction interface. Maybe PortAudio would suit you well; it's hard to tell without knowing the specifics of the problem you're solving in your apps.
                      My applications aren't targeting Linux anymore, so the point is moot. I'd probably just run them through SDL if I needed to support more than the BSDs and commercial UNIX.

                      Originally posted by mdias View Post
                      I'll be honest and say I don't really see the purpose of the "Slave Sound Service" you describe, or how multithreading would help here, since it seems like all it does is pass the buffer to the "Master Sound Service". It seems like the system you thought up has more layers than what we currently have with App -> PA -> ALSA -> HW.

                      No matter what you come up with, you will always have to implement a way to resample client streams to whatever your HW supports and is set to output.
                      Most people use onboard audio these days, and you don't get multiple voices/channels on those. And even if you did, a software mixing+resampling solution is, most of the time, vastly superior to whatever resampling capabilities your multi-voice, multi-sample-rate sound card has. The only advantage of HW is speed, and thus low latency. That said, my ALC889 can reach low latency with JACK and rtirq enabled.

                      There really is no escaping the separation between normal users and pro users when it comes to audio.
                      Normal users want it to just work, and for things to just work you need at least a system in place ready to resample audio coming from multiple clients; they don't care about buffer sizes, periods, sample rates/formats, or who's mixing the audio. If you're a pro-audio user, you'll just have to live with the current situation and use JACK; your software should work great with it too!

                      ASIO also has limitations on Windows, and it seems to be good enough for professionals...
                      Sound in general isn't something I'm the most skilled with, so yeah, I'm not exactly the authority here. I'll be honest, that was just the brief, 15-minute solution I drew up. Mixing/resampling capabilities in software aren't my problem; it's the fact that PA/ALSA isn't sufficient for the one-solution ideal at all.

                      Redesigning a sound system to work quickly in userland with a fast, well-documented protocol like STREAMS would be my ideal, including a real-time latency guarantee from the system. In any case, as imperfect as OSS is, it is a much, much simpler solution and somewhat closer to an ideal than PulseAudio/ALSA.
