
PipeWire Audio Backend Comes To QEMU


  • #11
    Wonder if this lets us run Ableton Live inside a VM with lower latency, since current solutions are something like 8-12 ms.



    • #12
      But getting back to the question of why we need the native PipeWire protocol implemented in the client apps: it's to transition to a 'pull' architecture. With the way buffering, interrupts and DMA work, we want the final endpoint hardware consuming the audio to be calling the shots, so it should be the one asking for the next buffer, with the request originating on the output side and calling backwards through the whole audio chain. And only PipeWire can do this pull architecture on Linux, right? That's the whole point of it. At least this has been my understanding, ever since learning about PipeWire, of what the actual benefit is of a native PipeWire chain without translation to any other protocol (PulseAudio, JACK).
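      To make the pull idea concrete, here is a minimal sketch of a native pw_stream playback client, loosely based on the upstream PipeWire tutorial examples (written from memory, so treat the exact property keys and flags as approximations rather than a verified build). The point to notice is that the app never pushes on its own schedule: the graph invokes on_process() whenever the sink side wants the next buffer, and the client fills exactly the buffer it is handed.

      Code:
      /* pull-demo.c -- minimal pw_stream playback sketch (names from memory,
       * loosely following the upstream tutorial; double-check before relying on it).
       * Build (roughly): cc pull-demo.c -o pull-demo $(pkg-config --cflags --libs libpipewire-0.3) -lm
       */
      #include <math.h>
      #include <spa/param/audio/format-utils.h>
      #include <pipewire/pipewire.h>

      #define RATE     48000
      #define CHANNELS 2

      struct data {
          struct pw_main_loop *loop;
          struct pw_stream *stream;
          double accumulator;
      };

      /* The graph calls this when the sink side needs data: this is the "pull". */
      static void on_process(void *userdata)
      {
          struct data *d = userdata;
          struct pw_buffer *b;
          float *dst;
          uint32_t i, c, n_frames, stride;

          if ((b = pw_stream_dequeue_buffer(d->stream)) == NULL)
              return;
          if ((dst = b->buffer->datas[0].data) == NULL)
              return;

          stride = sizeof(float) * CHANNELS;
          n_frames = b->buffer->datas[0].maxsize / stride;

          /* Fill exactly the buffer we were handed with a 440 Hz test tone. */
          for (i = 0; i < n_frames; i++) {
              d->accumulator += 2.0 * M_PI * 440.0 / RATE;
              if (d->accumulator >= 2.0 * M_PI)
                  d->accumulator -= 2.0 * M_PI;
              float v = sin(d->accumulator) * 0.2;
              for (c = 0; c < CHANNELS; c++)
                  *dst++ = v;
          }

          b->buffer->datas[0].chunk->offset = 0;
          b->buffer->datas[0].chunk->stride = stride;
          b->buffer->datas[0].chunk->size   = n_frames * stride;
          pw_stream_queue_buffer(d->stream, b);
      }

      static const struct pw_stream_events stream_events = {
          PW_VERSION_STREAM_EVENTS,
          .process = on_process,
      };

      int main(int argc, char *argv[])
      {
          struct data d = { 0 };
          const struct spa_pod *params[1];
          uint8_t buffer[1024];
          struct spa_pod_builder pod = SPA_POD_BUILDER_INIT(buffer, sizeof(buffer));

          pw_init(&argc, &argv);
          d.loop = pw_main_loop_new(NULL);
          d.stream = pw_stream_new_simple(
                  pw_main_loop_get_loop(d.loop), "pull-demo",
                  pw_properties_new(PW_KEY_MEDIA_TYPE, "Audio",
                                    PW_KEY_MEDIA_CATEGORY, "Playback",
                                    NULL),
                  &stream_events, &d);

          params[0] = spa_format_audio_raw_build(&pod, SPA_PARAM_EnumFormat,
                  &SPA_AUDIO_INFO_RAW_INIT(.format   = SPA_AUDIO_FORMAT_F32,
                                           .channels = CHANNELS,
                                           .rate     = RATE));

          /* AUTOCONNECT lets the session manager pick the sink; RT_PROCESS runs
           * on_process() on the realtime data thread driven by the graph. */
          pw_stream_connect(d.stream, PW_DIRECTION_OUTPUT, PW_ID_ANY,
                            PW_STREAM_FLAG_AUTOCONNECT |
                            PW_STREAM_FLAG_MAP_BUFFERS |
                            PW_STREAM_FLAG_RT_PROCESS,
                            params, 1);

          pw_main_loop_run(d.loop);   /* runs until interrupted */
          pw_stream_destroy(d.stream);
          pw_main_loop_destroy(d.loop);
          return 0;
      }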

      Now, perhaps not everybody is going to notice much for their own fairly undemanding use cases, but that doesn't mean there aren't tangible benefits for others. In particular for realtime audio processing at the lowest possible latency, overhead and jitter. Second to that, a full pull architecture might also make for a more power-efficient audio pipeline on mobile platforms (laptops, phones and tablets), giving a bit longer battery run time for the millions of us who just want to put Spotify on all the time.

      Now clearly, in the year 2023 there is very little expectation of deprecating all the PulseAudio stuff floating about. However, PipeWire as a core infrastructure platform should be intended to last maybe 20 years, right? So in the long term, by 2040 or some similarly far-out timeline, the native PipeWire protocol should eventually become the majority case for the most popular client-side apps, just by the general way a lot of our software ends up getting replaced over time. Probably not even that long, really.



      • #13
        I'm waiting for Wayband, a new Wayland-inspired protocol for audio. DBM (Direct Buffer Manager) should allow for perfect buffer samples at 48 kHz (similar to how Wayland locks you to 60 fps). Then Swayband, a fork of Sway, will implement the new Wayband protocol, and all desktop apps will need to be rewritten for audio to work. There should be an unredirected buffer output for compositors (that support it) to allow up to 192 kHz (to enjoy what you paid for), though the samples won't be pitch perfect. After the inception of Wayband, it should take ~15 years for the necessary extensions to be added to the Wayband protocol to support features like Bluetooth audio or S/PDIF that the authors of Wayband did not deem necessary during development of the protocol specification.



        • #14
          Originally posted by dreamcat4
          Wonder if this lets us run Ableton Live inside a VM with lower latency, since current solutions are something like 8-12 ms.
          I'm not sure if it will; I don't see why there would be any lower latency than using JACK, for instance. You might be able to use a shared memory device or socket and pump audio over that?



          • #15
            Looks like the GitHub issues were hidden away once they moved to GitLab. This was something I asked about back in 2017, so it's awesome to see.

            Created by: polarathene When ready, PipeWire will be able to use the current PulseAudio support in QEMU right? Would that support potentially benefit from the...


            Still unclear if it allows supporting more than stereo (which the PulseAudio backend was constrained to), but if not, I guess for Linux guests you'd just work around that via network to the host instead of an audio device using this backend?

            I haven't tried audio via QEMU for a long time, but if this PipeWire backend better handles avoiding the audio issues I mentioned in the feature request, that'd be fantastic.



            • #16
              Originally posted by polarathene
              Looks like the GitHub issues were hidden away once they moved to GitLab. This was something I asked about back in 2017, so it's awesome to see.

              Created by: polarathene When ready, PipeWire will be able to use the current PulseAudio support in QEMU right? Would that support potentially benefit from the...


              Still unclear if it allows supporting more than stereo (which the PulseAudio backend was constrained to), but if not, I guess for Linux guests you'd just work around that via network to the host instead of an audio device using this backend?

              I haven't tried audio via QEMU for a long time, but if this PipeWire backend better handles avoiding the audio issues I mentioned in the feature request, that'd be fantastic.
              Looks like 8 channels after a quick grep; might be wrong though, I plan on compiling and testing it within the week.

              EDIT: 8 being 7.1, since one of them is the LFE.
              EDIT2: With how PipeWire works, you could easily just create a couple of sinks (multiple audio devices) and join them together with a WirePlumber config if you need more channels (rough config sketch below).
              Last edited by Quackdoc; 09 May 2023, 08:44 PM.
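
              A rough example of that kind of thing, for anyone who wants to experiment: the snippet below is untested here, the file name is hypothetical, and the property keys are recalled from the PipeWire virtual-devices documentation, so double-check them. It asks the null-audio-sink adapter for a single virtual sink exposing 7.1 channel positions; combining several sinks through WirePlumber rules, as described above, is a variation on the same idea.

              Code:
              # ~/.config/pipewire/pipewire.conf.d/10-virtual-71-sink.conf  (hypothetical name)
              context.objects = [
                  {   factory = adapter
                      args = {
                          factory.name     = support.null-audio-sink
                          node.name        = "virtual-71-sink"
                          node.description = "Virtual 7.1 Sink"
                          media.class      = Audio/Sink
                          object.linger    = true
                          audio.position   = [ FL FR FC LFE RL RR SL SR ]
                      }
                  }
              ]

              Anything played into such a sink shows up on its monitor ports, from where it can be routed onwards with a patchbay or further WirePlumber rules.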



              • #17
                Originally posted by NSLW
                This Fedora background looks really nice.
                Yeah, this was my personal favourite; it got replaced by those ugly balloons and never really recovered: https://web.archive.org/web/20230422...iki/Wallpapers



                • #18
                  Originally posted by Anux
                  So this is basically an alternative for -audiodev pa?
                  An alternative to PulseAudio? I've been specifically avoiding PulseAudio for a decade.
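
                  For what it's worth, that does appear to be the idea: it's selected the same way as the other backends. A sketch of how an invocation might look, assuming a QEMU build that includes the new backend and that the driver name is simply "pipewire" (worth confirming against -audiodev help on your build):

                  Code:
                  # hypothetical example: swap "-audiodev pa" for the new PipeWire backend
                  # (list the drivers your build supports with: qemu-system-x86_64 -audiodev help)
                  qemu-system-x86_64 \
                      -audiodev pipewire,id=snd0 \
                      -device intel-hda -device hda-duplex,audiodev=snd0 \
                      ... usual machine, disk and display options ...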

