PulseAudio 15 Lands mSBC Codec Support To Enable Bluetooth Wideband Speech

  • #21
    One of the key differences in the sound of different instruments playing the same note is the shape of the sound wave. If you sampled at 20 kHz, the best you would get is a square wave at 20 kHz, and high notes would all sound the same. With greater sampling rates you get a more accurate recreation of the shape of the wave, which can be very complex. That makes a huge difference in how it sounds, especially with harmonics. Most people won't notice that, though, because they are used to listening to MP3s, where a good chunk of the sound is thrown out to save bandwidth. But if you listen to a non-electronic instrument, especially cymbals, played live rather than recorded, you will instantly tell the difference.

    As for PW: I am on FC-33 but upgraded to PW, and I gained some routing functionality, but the Bluetooth connectivity isn't quite as good, in that if I try to reconnect to an existing connection it often doesn't connect; I sometimes have to remove the connection and recreate it. But I am new to BT headsets and have had issues with both Pulse and PW. My impression is that the BT protocol is just kind of flaky.



    • #22
      Originally posted by MadeUpName View Post
      One of the key differences in the sound of different instruments playing the same note is the shape of the sound wave. If you sampled at 20 kHz, the best you would get is a square wave at 20 kHz, and high notes would all sound the same. With greater sampling rates you get a more accurate recreation of the shape of the wave, which can be very complex.
      No, digital sampling doesn't give square waves; that is a misconception. The Nyquist sampling theorem says that a sampling rate of just twice the highest frequency in the signal is enough to perfectly recreate the original waveform.
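      To make that concrete, here is a quick numpy sketch of my own (an illustration, not something from the video): sample a 19 kHz tone, which sits below the Nyquist limit of a 48 kHz sample rate, then rebuild the waveform between the sample points with Whittaker-Shannon (sinc) interpolation. The reconstruction comes out as a smooth sine, not a square wave.

```python
import numpy as np

fs = 48_000   # sample rate (Hz)
f = 19_000    # tone frequency, below the Nyquist limit fs / 2

# Sample the band-limited signal (here, a pure tone).
n = np.arange(256)
samples = np.sin(2 * np.pi * f * n / fs)

# Reconstruct the waveform *between* the sample points with
# Whittaker-Shannon (sinc) interpolation.
t = np.arange(2560) / 10.0                 # positions in units of samples
recon = np.array([np.sum(samples * np.sinc(ti - n)) for ti in t])

ideal = np.sin(2 * np.pi * f * t / fs)

# Away from the edges of the finite window, the reconstruction matches
# the original sine closely -- no square wave anywhere.
err = np.max(np.abs(recon[500:2000] - ideal[500:2000]))
```

      The small residual error here comes only from truncating the interpolation to a finite window; with an infinite window the theorem guarantees exact reconstruction of any band-limited signal.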

      I highly recommend this video by Monty on the subject.



      • #23
        I have prepared popcorn for the inevitable "Pipewire sucks, Pulseaudio follows the UNIX philosophy, we will never stop using Pulseaudio" war.



        • #24
          pipewire has been working phenomenally for me on arch. it's a lot easier on resources and, more importantly, latency. a lot of steam games where you need that latency workaround in the launch options (PULSE_LATENCY_MSEC=90 %command%) to either get sound working or to stop crackling/popping don't need it with pipewire. skyrim special edition with proton-ge, for example: i can play it now without touching the launch options. audio just works, like on windows.

          even just playing audio, cpu usage is slightly lower. the only fancy thing i have done with pipewire is set the sample rate to 192 kHz, since that's what my dac (a topping e30) supports. verified it works thanks to its lcd screen.
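          for anyone wanting to try the same thing, the default sample rate can be set in a pipewire config drop-in, roughly like this (the file name is just an example, and exact property names can vary between pipewire versions, so treat this as a sketch and check your distro's docs; `default.clock.allowed-rates` in particular may not exist on older releases):

```
# ~/.config/pipewire/pipewire.conf.d/rate.conf (example file name)
context.properties = {
    default.clock.rate          = 192000
    default.clock.allowed-rates = [ 44100 48000 96000 192000 ]
}
```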

          pulseaudio did what it needed to do, but unless poettering and co are willing to overhaul it, it's time for something new. pulseaudio has stayed pretty much the same, while other audio stacks, like microsoft's, have been constantly overhauled and worked on. it might annoy manufacturers like creative because of driver breakage, but microsoft needed to do it. pipewire is a nice, fresh implementation that does a lot of stuff right.

          and the whole network audio stuff: that's such a niche case. linux audio can't be held back by a small niche use case. it's like MPD; the overwhelming majority of people who use it just use it locally.
          Last edited by fafreeman; 06 April 2021, 04:29 PM.



          • #25
            Adoption (or not) of Pipewire over Pulseaudio might be linked to your usage as well.

            For example, on my laptop I never output sound to another device, except through Spotify, but that's a network ecosystem thing, and I barely use Bluetooth (the one time I did, I couldn't make my earbuds work with pipewire, though). My use cases are fairly straightforward, so I switched to pipewire many months ago.

            On the contrary, my HTPC is hooked to a 4K TV and 5.1 speakers through an AV receiver, a setup that has been tricky to manage for pulseaudio, and then the amdgpu driver (HDMI sound output from the GPU), for a very long time. It's much more likely I will run into issues on that system than on my laptop. So I will stick to pulseaudio on that setup until Pipewire is more mature.



            • #26
              Originally posted by AnAccount View Post
              No, digital sampling doesn't give square waves; that is a misconception. The Nyquist sampling theorem says that a sampling rate of just twice the highest frequency in the signal is enough to perfectly recreate the original waveform.

              I highly recommend this video by Monty on the subject.
              Take two very simple waves, a sawtooth and a reverse sawtooth. If their frequency is 20 kHz and you are sampling at 40 kHz, they will both have exactly the same sampling points. Sawtooth waves are what we call primitives when programming synths, because they are so simple. Most waves are far more complex.



              • #27
                Originally posted by MadeUpName View Post

                Take two very simple waves, a sawtooth and a reverse sawtooth. If their frequency is 20 kHz and you are sampling at 40 kHz, they will both have exactly the same sampling points. Sawtooth waves are what we call primitives when programming synths, because they are so simple. Most waves are far more complex.
                To quote Monty, "If it differs even minutely from the original, it contains frequency content at or beyond Nyquist, breaks the band-limiting requirement, and isn't a valid solution".

                TL;DR: By clamping the input audio to 22.05kHz in the analog circuitry before sampling at 44.1kHz to digitize, you mathematically guarantee that there's only one signal that can match your sample points... the one that came into the system.
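                A little numpy sketch of that point (my own illustration): build the sawtooth from its Fourier series, keep only the harmonics that survive a band-limiting filter below Nyquist, and see what is left of a 20 kHz fundamental sampled at 44.1 kHz.

```python
import numpy as np

fs = 44_100      # CD sample rate
f0 = 20_000      # sawtooth fundamental (Hz)

# A sawtooth is a sum of harmonics sin(2*pi*k*f0*t)/k for k = 1, 2, 3, ...
# Band-limiting before the ADC removes every harmonic at or above fs/2.
n_harmonics = int((fs / 2) // f0)        # harmonics below Nyquist: just 1

t = np.arange(1024) / fs
saw = sum(np.sin(2 * np.pi * k * f0 * t) / k
          for k in range(1, n_harmonics + 1))
rev_saw = -saw   # reversing the ramp flips the sign of every harmonic

# With only the fundamental left, both "sawtooths" are the same pure sine
# (one phase-inverted). The complex shape lived entirely above Nyquist.
```

                So yes, the two waves share sample points, but only because after band-limiting they really are the same waveform up to polarity, not because sampling lost information it was supposed to keep.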

                (24-bit samples and 192kHz sampling rates are equivalent to 24-bit or 32-bit floating-point color channels in an image or video editor. They're used during mixing and mastering to minimize rounding errors across long chains of filtering and editing stages. There's no need for more than 16-bit 44.1kHz for the raw inputs and mastered outputs... especially since feeding ultrasonic frequencies to speakers not designed for them will do nothing at best and is likely to cause unpleasant distortion of the audible frequencies.)
                Last edited by ssokolow; 06 April 2021, 09:30 PM.



                • #28
                  Originally posted by caligula View Post
                  How is this relevant for ordinary desktop users? You can't hear anything past 22 kHz.
                  That is a presumption that the historic design of audio systems was built around, and the bad news is that it's not right.

                  Originally posted by AnAccount View Post
                  Not relevant for a desktop user listening to music, since as you say, we cannot hear that. But, for a general audio architecture it is important to handle audio above what we can hear, since there are pro users mixing audio. And Pipewire is also aiming for the pro users with its Jack support.

                  This is not as straightforward as it seems. As we gain a better understanding of humans, we are learning, shock horror, that we don't just hear/feel sound with our ears. One of the reasons some recordings of music don't seem right is missing ultrasound (higher than normal human ears can hear) and infrasound (lower than human ears can hear), as both affect areas of the body outside the ears.

                  Yes, the presumption that anything past 22 kHz is not required is wrong. Please note I said feel sound, not hear: with a lot of ultrasound and infrasound, if you play a tone and ask a human if they can hear it, they will say no, because consciously they cannot hear it. But it's a different matter when you are monitoring brainwaves and emotional state. Yes, this can cause a person to say that a song with ultrasound or infrasound is better or worse than the absolutely identical song without them.

                  If we are honest, we don't know enough about humans to have the complete list of frequencies humans can subconsciously hear/feel. Any future-looking audio solution needs to allow for a wider frequency range than we have historically used. The hard part is that since we don't have the list of what humans can subconsciously hear/feel, this leaves those designing audio solutions for the future with a truly unsolved problem.

                  The audio we can consciously hear was really easy to test: take people aside and run a basic press-the-button test. The sounds humans hear subconsciously are a lot harder, and yes, monitoring brainwaves and other signals also has background-noise problems.

                  With 192 kHz audio we don't even know if that covers everything humans can subconsciously hear; we only know it's better coverage than stopping at 44.1 kHz (which caps the generated frequencies at 22 kHz), and so is 96 kHz audio. Yes, this will in time require changes to headphone and speaker design as we understand more, and it will make testing them a lot harder. Fun, when you have to test for something you cannot consciously hear.

                  AnAccount, the big thing is that we don't in fact know what is truly above what humans can subconsciously hear. So as we understand more, future general desktop users may come to expect broader-frequency audio support, and of course in that case you would expect desktop speakers and headphones to have increased in frequency range as well.

                  The fun of future-proofing your design. The research papers that exist on the topic tell us clearly that the upper limit of human subconscious hearing well and truly exceeds 22 kHz; the nightmare is that none of them say where the real maximum is.



                  • #29
                    Originally posted by oiaohm View Post
                    That is a presumption that the historic design of audio systems was built around, and the bad news is that it's not right.

                    This is not as straightforward as it seems. As we gain a better understanding of humans, we are learning, shock horror, that we don't just hear/feel sound with our ears. One of the reasons some recordings of music don't seem right is missing ultrasound (higher than normal human ears can hear) and infrasound (lower than human ears can hear), as both affect areas of the body outside the ears.

                    Yes, the presumption that anything past 22 kHz is not required is wrong. Please note I said feel sound, not hear: with a lot of ultrasound and infrasound, if you play a tone and ask a human if they can hear it, they will say no, because consciously they cannot hear it. But it's a different matter when you are monitoring brainwaves and emotional state. Yes, this can cause a person to say that a song with ultrasound or infrasound is better or worse than the absolutely identical song without them.

                    If we are honest, we don't know enough about humans to have the complete list of frequencies humans can subconsciously hear/feel.
                    Yes, we know that people are affected by sounds way beyond what we can hear; the military have even tried to weaponize this. However, that fact is still not relevant for audio. There have been studies on the subject, and when the setup is done correctly in a double-blind study, no subjects showed any signs of super-hearing above 22 kHz. However, it is extremely difficult to set up fair tests, since, as you say, we react to audio beyond our conscious mind. For example, when comparing music systems, people tend to prefer the system with a slightly louder volume (which led us to the loudness war, and so on).

                    So even if I recognize that you have a valid point, it is still of quite questionable relevance to audio.



                    • #30
                      Originally posted by AnAccount View Post
                      Yes, we know that people are affected by sounds way beyond what we can hear; the military have even tried to weaponize this. However, that fact is still not relevant for audio. There have been studies on the subject, and when the setup is done correctly in a double-blind study, no subjects showed any signs of super-hearing above 22 kHz. However, it is extremely difficult to set up fair tests, since, as you say, we react to audio beyond our conscious mind. For example, when comparing music systems, people tend to prefer the system with a slightly louder volume (which led us to the loudness war, and so on).

                      So even if I recognize that you have a valid point, it is still of quite questionable relevance to audio.
                      There are in fact triple-blind tests from the military showing emotional effects and brain-processing-speed effects from ultrasonics. Generally you cannot get approval for triple-blind studies, which is where the subjects do not know they are being experimented on. Of course, if your emotional state or your brain's processing speed is being affected, what you think about the audio you are hearing is going to be different as well.

                      There are multiple levels of complexity in setting up these tests. You have to have headphones/speakers that will generate something in the ultrasonic range such that the human body generates a signal somewhere that can make it into the nervous system. At the current level of research we don't have the information to say that a given pair of headphones or speakers will generate all the frequencies that affect human perception.

                      The point about people preferring their audio at a louder volume is not exactly right. With audio compression, if you measure dB it's not in fact louder, but a human perceives it as louder because you are moving the audio into the range where human ears have the best fidelity. We don't know the best-fidelity range of humans for ultrasonics.

                      There is a fairly good idea of the human fidelity range for infrasound: good subwoofers generate some sounds your ears cannot hear because they are known to have effects. This came out of military research and resulted in music with them feeling more immersive.

                      The horrible part, as found with infrasound: with most infrasound, if you played the infrasound without aligned audio in the human hearing range, the infrasound was basically ignored, and studies with ultrasound have shown the same thing.

                      Yes, testing infrasound and ultrasound effects on humans is downright complex, and fair tests are insanely hard to do. The subconscious side of audio is really hard to deal with. The infrasound work took over 30 years of research before there was enough study to start making informed choices in subwoofer design and so on. And remember, infrasound covers a lot less audio space than ultrasound.

                      Something to remember here: something like pipewire has the possibility of being like the X11 server and sticking around for 30 years. So if you are designing pipewire today, it pays to include support for as broad a range as possible, given that in reality the study of ultrasonic effects on humans is far less complete than the study of infrasound was 100 years ago.

                      Yes, it's in theory possible to take a song you like, add the right ultrasonics, and make you hate it, but we don't have enough understanding of how ultrasonics work on humans to actually pull this off.

                      AnAccount, yes, I do see this research as potentially double-edged. Just as audio compression added a lot of artefacts to modern music (the older music before audio compression did not have what a lot of people call the loudness war, which is really not a loudness war but a question of how far you can compress a song into the high-fidelity range of human hearing without people hating it), ultrasonics being able to directly make a happy song feel happier and a sad song feel sadder could have far worse effects. It is possible that particular ultrasonics will be outlawed from music in the future, but those laws are not in place yet, and we don't know where those ranges are.

                      AnAccount, the big thing is that we already have companies working on headphones and the like that really are 96 kHz capable and able to properly generate frequencies above 22 kHz, yes, all the way up to 48 kHz, and they are doing this before we have the research data to know whether it is even physically safe for long-term use at volume, let alone the mental effects. (Nothing like being unsafe.) At this stage most composers of music are not making audio files for this range. Please note I said most; some composers are.

                      The problem is that alongside what most people would call normal audio, we have trailblazers experimenting well and truly outside it, and what they are currently doing may become more common in the future. All we know for sure is that at some point pipewire may commonly need to handle audio that exceeds the 22 kHz limit, because people may at some point have speakers and headphones exceeding it; by how far, we don't know yet. This is all because the 22 kHz assumption is wrong, and we don't know enough to say that everything above 22 kHz is not beneficial.

                      There was an early mistake with infrasound where studios, to remove background noise, put in a filter removing all the infrasound humans cannot hear with their ears. This resulted in some very creative hacks to fake up subwoofer feeds in an attempt to put it back, because that filter adversely affected the feel of the music. So we incorrectly cut off both the top and the bottom of the audio. The problem is that on the top side, with ultrasonics, we don't know how much has to be put back and what should and should not be put back, so a lot of the people currently attempting it are really doing wild guesswork, crossing their fingers that they do nothing wrong.

