Google Chrome/Chromium Now Supports PulseAudio


  • #81
    Originally posted by curaga:
    No, if you write for OSS your app will run everywhere. If you write for ALSA, your app will run on most linux. See how each is a subset? Pulse is a subset of the alsa group.
    That's odd.
    Previous versions of Audacity didn't work on my Ubuntu system thanks to the OSS dependency!
    Doh!

    OSS should die already, we are in the 10s now not the 60s.



    • #82
      Originally posted by curaga:
      No, if you write for OSS your app will run everywhere.
      Wrong!

      Code:
      $ mplayer -ao oss foo.avi
      
      [AO OSS] audio_setup: Can't open audio device /dev/dsp: No such file or directory
      Failed to initialize audio driver 'oss'
      Could not open/initialize audio device -> no sound.
      Audio: no sound



      • #83
        Originally posted by danwood76:
        That's odd.
        Previous versions of Audacity didn't work on my Ubuntu system thanks to the OSS dependency!
        Doh!

        OSS should die already, we are in the 10s now not the 60s.
        That's a point in favor of OSS, no deps. libasound is a lib you may not want on your system, and pulse is a lib & a daemon, worst of all.

        @AC:

        If I were a pulse fan, I'd say your system was misconfigured and discount any possibility of anything else. I take it you have deliberately removed ALSA's OSS support?



        • #84
          I didn't remove anything. This is just stock Ubuntu 11.04. Since Ubuntu is the most popular Linux distro I'd say the opposite of what you were saying: Avoid OSS like the plague, your app will not run on the major Linux distros.



          • #85
            Originally posted by numasan:
            Well, as long as it stays out of my way, I don't mind PA. My question was based on this:



            I'm just wondering how the experience will be better?
            To answer this question, you have to know how the audio stack is architected currently.

            If you don't use PulseAudio at all (i.e., PA isn't even running on your system), you can ignore PA support and pretend like it's not there.

            If you do use PA, like 98% of users running modern distros do, native PA protocol support is extremely beneficial.

            Basically, you can't get "direct ALSA" (directly to the soundcard) at all while running PA. Well, you can, but it disrupts PA for all but a few ancient, obsolete soundcards that still support hardware mixing (or less-obsolete "audiophile" professional soundcards). So rather than stopping PA to get direct ALSA (which bypasses all the benefits of PA anyway), there's a compromise: you can call the ALSA API in your apps, but it gets redirected through pulseaudio, and then back to ALSA. So the chain looks something like:

            app -> ALSA API -> pulseaudio -> direct ALSA to hardware -> kernel -> speakers.

            The chain that Chrome's new pulseaudio support brings looks like:

            app -> pulseaudio -> direct ALSA to hardware -> kernel -> speakers

            It is fundamentally impossible for the shorter chain to be less efficient, assuming that the "ALSA API" layer speaking to the application in the first chain has to do at least as much work as speaking directly to the pulseaudio API.

            In fact, the longer chain is almost guaranteed to be significantly less efficient in practice, because ALSA does its own buffering in userspace, which adds latency, RAM accesses and CPU usage. Even if the "ALSA API" part of the first chain were limited to single-function call-throughs to PulseAudio, that would still incur the overhead of all those function setups and teardowns (push the frame pointer onto the stack, store local variables into registers, etc.).

            But in reality, if you look at the implementation, what that ALSA API layer does is rather complex, because there's no direct 1:1 mapping between libpulse API calls and ALSA API calls. Adapting the one interface to the other is an error-prone procedure that, thanks to the horrible design of the ALSA API, almost guarantees that 50% of the applications out there that claim to support "ALSA" will in fact not work well (or at all) with the pulseaudio PCM plugin for ALSA.
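            For reference, that "ALSA API -> pulseaudio" hop in the first chain is normally wired up by the `pulse` PCM plugin from alsa-plugins. A minimal sketch of the routing config (most distros ship an equivalent automatically; this would go in ~/.asoundrc or /etc/asound.conf):

```
pcm.!default {
    type pulse    # send all default ALSA PCM streams to PulseAudio
}
ctl.!default {
    type pulse    # send mixer/volume control traffic to PulseAudio too
}
```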

            That is why, fundamentally, most programs that wish to support pulseaudio well have done so by implementing a backend for the native PA protocol, rather than just using ALSA.
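            A native backend is simpler than it sounds. As a purely illustrative sketch (not Chrome's actual code; assumes the libpulse-simple development headers are installed, and the app/stream names here are made up), minimal playback via the native protocol looks like:

```c
#include <pulse/simple.h>
#include <pulse/error.h>
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Describe the stream format we will feed to PulseAudio. */
    pa_sample_spec spec = {
        .format   = PA_SAMPLE_S16LE,
        .rate     = 44100,
        .channels = 2,
    };
    int err = 0;

    /* Connect to the local pulseaudio daemon as a playback client. */
    pa_simple *s = pa_simple_new(NULL, "demo-app", PA_STREAM_PLAYBACK,
                                 NULL, "playback", &spec, NULL, NULL, &err);
    if (!s) {
        fprintf(stderr, "pa_simple_new: %s\n", pa_strerror(err));
        return 1;
    }

    /* One second of silence: 44100 frames * 2 channels of 16-bit samples. */
    static int16_t silence[44100 * 2];
    pa_simple_write(s, silence, sizeof silence, &err);
    pa_simple_drain(s, &err);   /* block until the buffer has played out */
    pa_simple_free(s);
    return 0;
}
```

Compile with `gcc demo.c $(pkg-config --cflags --libs libpulse-simple)`. No ALSA emulation layer is involved: the library speaks the PA protocol directly to the daemon.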

            Now if they start talking about removing the ALSA support, then you're entitled to get up in arms. But since this is a win-win for pulseaudio users and has no negative impact on people using any other sound system, there's really no reason to be alarmed at this point.



            • #86
              Originally posted by AnonymousCoward:
              Wrong!

              Code:
              $ mplayer -ao oss foo.avi
              
              [AO OSS] audio_setup: Can't open audio device /dev/dsp: No such file or directory
              Failed to initialize audio driver 'oss'
              Could not open/initialize audio device -> no sound.
              Audio: no sound
              Yes. The ALSA devs employ M$ techniques in order to force people to not use OSS.

              ALSA should really die and OSS4 be adopted. This is 2011, for heaven's sake, not 1990.



              • #87
                Originally posted by curaga:
                That's a point in favor of OSS, no deps. libasound is a lib you may not want on your system, and pulse is a lib & a daemon, worst of all.
                Yes. Forcing each and every application developer to reinvent the wheel is far superior to having them all use shared libs. Because, you know, buggy and redundant software that is difficult to program and support is far superior to having libraries installed on your system.



                • #88
                  both are shit...

                  do something like CoreAudio...

                  my idea was

                  app - mixer -(+) driver - hardware - speakers

                  app is app
                  mixer should be PA
                  driver should be the only part of ALSA [or new stuff]



                  • #89
                    Originally posted by RealNC:
                    Yes. The ALSA devs employ M$ techniques in order to force people to not use OSS.

                    ALSA should really die and OSS4 be adopted. This is 2011, for heaven's sake, not 1990.
                    Ridiculous! ALSA developers are physically incapable of forcing anyone to do anything, since the entire ecosystem is free software.

                    If you are that adamant about transparent in-kernel OSS support alongside ALSA, then rewrite or extend the snd-pcm-oss kernel module so that it ties into the userland plugin pipeline, so that at least it can take advantage of the dmix or pulse PCM plugins for software mixing.

                    Of course, someone already did that but with a special FUSE-extension called CUSE (Character Device in Userspace). You can actually get a real /dev/dsp, a bona fide character device (no tricks!), which redirects to pulseaudio. It works fantastically with apps that are written to use OSS/Legacy, but it doesn't generally work with apps specifically targeting the OSS4 API.

                    oss-proxy (sometimes called ossp) does something like this:

                    the ossp userland daemon asks CUSE (a kernel module) to create a character device, /dev/dsp.

                    Then, along comes a userland app that tries to open /dev/dsp. It successfully gets a file descriptor, so it's all happy and assumes it can start executing OSS ioctls on it.

                    And they work.

                    CUSE acts as an in-kernel intermediary between the osspd userland daemon (which does all the interesting work) and the userland app, which, for all intents and purposes, is doing I/O with a character device file.

                    Meanwhile, the osspd userland daemon fires up a process running under your user account that connects to a local pulseaudio daemon and does audio I/O to it, based on what it receives from osspd. So it acts like a regular pulseaudio client on the backend, but on the frontend it has a custom interface to osspd, which itself is receiving data from CUSE through the kernel interface.

                    It's pretty brilliant, and this whole "no such file or directory /dev/dsp" nonsense argument disappears.
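                    The difference is easy to check from a shell. A minimal sketch (assumes only a POSIX shell; the function name is made up) that probes whether an OSS device node actually exists as a character device:

```shell
# Probe whether an OSS-style device node exists. With osspd + CUSE
# running it shows up as a real character device; on a stock
# ALSA/PulseAudio-only system it is simply absent.
probe_oss_node() {
    if [ -c "$1" ]; then
        echo "present"
    else
        echo "absent"
    fi
}

probe_oss_node /dev/dsp
```

On the stock Ubuntu system described earlier in the thread this reports `absent` for /dev/dsp, matching the mplayer failure above; with osspd and CUSE loaded it reports `present`.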

                    The tech is out there, it works. Use it. If you can avoid using the OSS API at all, then for the love of god, do it. But if you absolutely must use it, it's available, and it works with ALSA and pulseaudio, all without disrupting your software mixing for the "happy path" (happy path == well-behaved apps that directly call pulseaudio).

                    You get the robust hardware driver support of ALSA, combined with the robust software mixing and volume control of pulseaudio, combined with the (if necessary) legacy support for applications written for OSS3, all within one fairly cohesive stack. You can take it a step further and place a JACK daemon in between pulseaudio and ALSA, so that you can support the OSS3, ALSA (through pulseaudio plugin for ALSA), native pulseaudio, and JACK audio APIs all at the same time, all software mixed, with reasonable latency in all cases.

                    Everyone always fights "this API is better because blah blah" or "I don't like all this overhead" or "if I use this then it prevents me from using that". All of these arguments are irrelevant. Use the Universal Audio Stack. Run JACK and Pulseaudio at the same time and also support apps that use the ALSA API and OSS3 API. You can stop worrying about "will this app produce sound when I start it?" because the answer will always be yes. And you don't have to write any new code; it's all out there for you.



                    • #90
                      Originally posted by allquixotic:
                      You get the robust hardware driver support of ALSA, combined with the robust software mixing and volume control of pulseaudio, combined with the (if necessary) legacy support for applications written for OSS3
                      More like OSS 1. You know, the outdated zombie one.
