PulseAudio 15 Lands mSBC Codec Support To Enable Bluetooth Wideband Speech


  • #31
    I just set up my Sony wireless buds with different Bluetooth adapters on Linux and honestly, they all stunk. The first one didn't do HCI; the second one did, and sounded awful, with weird pitch changes at startup and constant dropouts. I just ordered an Asus HCI BT v5 adapter to see if that helps. Using the same buds with an LG V30 phone and an HP laptop (Windows), they sound better and I had no issues.

    According to the Bluetooth client, the signal was strong but the bitrate was not so good, even though I was no farther from the adapter than I would have been with my phone.

    Clearly the Bluetooth adapter plays a huge role in the quality of the link, which impacts the sound.



    • #32
      Originally posted by caligula View Post
      How is this relevant for ordinary desktop users? You can't hear anything past 22 kHz.
      Probably more like 16-18 kHz if you are more than 25 years old. However, by Nyquist you need, at an absolute minimum, a sampling frequency of twice that value to encode it, hence 44.1 kHz or 48 kHz. That said, many people want higher-resolution audio for various technical reasons; as for the non-technical ones, I don't think we will settle the hi-res audio debate here.
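      To make the Nyquist point concrete, here is a minimal numpy sketch (the 48 kHz rate and the 30 kHz/18 kHz tone pair are just illustrative): a sine above half the sampling rate produces sample values identical to those of an inverted lower-frequency sine, so the two are indistinguishable after sampling.

```python
import numpy as np

fs = 48_000                      # sampling rate in Hz
t = np.arange(0, 0.1, 1 / fs)    # 100 ms of sample instants

# A 30 kHz tone is above the 24 kHz Nyquist limit of a 48 kHz stream.
above_nyquist = np.sin(2 * np.pi * 30_000 * t)
alias = np.sin(2 * np.pi * 18_000 * t)

# At the sample instants t = n/fs:
#   sin(2*pi*30000*n/48000) = sin(2*pi*n - 2*pi*18000*n/48000)
#                           = -sin(2*pi*18000*n/48000)
# so the 30 kHz samples are the exact negative of the 18 kHz samples.
print(np.max(np.abs(above_nyquist + alias)))   # ~0
```

      This is why anything above fs/2 has to be removed before the converter, not after.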

      Many people have at least some hi-res audio files now. Currently, the way to play them on Linux is to connect the music player to a DAC via ALSA. PA in the mix quickly becomes a bottleneck and really limits what you can do. There are also non-professional audio streaming and recording use cases where PA is a pain in the butt, and so is getting JACK to play nicely with PA. All of this also makes it harder to use, say, a Raspberry Pi as a streamer or for DIY electronic instruments.





      • #33
        Originally posted by AnAccount View Post
        No, digital sampling doesn't give square waves; that is a misconception. The Nyquist sampling theorem says you only need to sample at twice the signal frequency to perfectly recreate the original waveform.

        I highly recommend this video by Monty on the subject.
        Respectfully, I don't think you understand the Nyquist sampling theorem: it only states that to sample a sine wave of frequency f, you need a sampling frequency of 2f.
        The rest is an application of Fourier series, which says that every periodic signal can be decomposed into a series of sinusoidal components.
        If you combine the two, a 48 kHz sampled waveform lets you encode an audio signal whose Fourier series has its highest harmonic at 24 kHz.
        There are a couple of problems with this:
        - Real audio has infinite overtones, i.e. higher harmonics. It is up to you to decide whether they matter; psychoacoustics, etc.
        - In practice, you need analog filters that cut off well before the sampling frequency; otherwise you get residual artifacts: aliasing. Steep analog filters tend to mess up the phase of a signal.
        - Real audio (as opposed to test tones) is not a periodic waveform. This breaks one of the assumptions for applying Fourier in the first place. While some analysis is still applicable, it simply means there is a time-dependent component that cannot be fully analyzed in the frequency domain, e.g. a step response.
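        The "infinite overtones" point is easy to check numerically: the Fourier series of a square wave contains every odd harmonic of the fundamental, decaying only as 1/k. A small numpy sketch, assuming a 1 kHz square at 48 kHz (both values arbitrary); the small deviations from theory come from the harmonics above 24 kHz aliasing back down:

```python
import numpy as np

fs = 48_000
f0 = 1_000                       # fundamental of an ideal square wave
n = fs                           # one second of audio -> 1 Hz FFT bins
square = np.where((np.arange(n) // (fs // (2 * f0))) % 2 == 0, 1.0, -1.0)

# Fourier series of a unit square wave: only odd harmonics, each with
# amplitude 4/(pi*k). Even harmonic bins stay at (numerically) zero.
spectrum = np.abs(np.fft.rfft(square)) / (n / 2)
for k in (1, 3, 5, 7):
    print(f"{k * f0:>5} Hz  measured {spectrum[k * f0]:.3f}"
          f"  theory {4 / (np.pi * k):.3f}")
```

        (A real ADC never sees the ideal square: its anti-aliasing filter removes the harmonics above fs/2 first, which is exactly the filtering trade-off in the next bullet.)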

        Whether any of this matters to you will depend a lot on whether you have a speaker and audio chain that can play anything above a 20 kHz tone, and on your age.

        IMO, Linux could really use an audio subsystem that lets the user decide how nuts they want to be.
        Last edited by mppix; 07 April 2021, 12:44 PM.



        • #34
          Originally posted by mppix View Post
          Respectfully, I don't think you understand the Nyquist sampling theorem: it only states that to sample a sine wave of frequency f, you need a sampling frequency of 2f.
          The rest is an application of Fourier series, which says that every periodic signal can be decomposed into a series of sinusoidal components.
          If you combine the two, a 48 kHz sampled waveform lets you encode an audio signal whose Fourier series has its highest harmonic at 24 kHz.
          There are a couple of problems with this:
          - Real audio has infinite overtones, i.e. higher harmonics. It is up to you to decide whether they matter; psychoacoustics, etc.
          No, it is not up to anyone to decide. You would have to be a dog or a bat to hear some of these harmonics above ~20 kHz.

          Originally posted by mppix View Post
          - In practice, you need analog filters that cut off well before the sampling frequency; otherwise you get residual artifacts: aliasing. Steep analog filters tend to mess up the phase of a signal.
          This is the only one of your arguments that actually holds water, and it is the real reason it can make sense to sample well above double the signal bandwidth. However, many people seem to overestimate their ability to tell a well-authored 44.1 ksps signal from a 192 ksps one in a blind test...

          Originally posted by mppix View Post
          - Real audio (as opposed to test tones) is not a periodic waveform. This breaks one of the assumptions for applying Fourier in the first place. While some analysis is still applicable, it simply means there is a time-dependent component that cannot be fully analyzed in the frequency domain, e.g. a step response.
          Well, it seems you do not understand the Nyquist sampling theorem very well yourself :-) Even a step response can be perfectly reproduced within the bandwidth limit.
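          For what it's worth, that reconstruction claim can be demonstrated in a few lines: Whittaker-Shannon (sinc) interpolation recovers a bandlimited signal at any instant between its samples. A minimal numpy sketch, where the tone mix and the test point are arbitrary choices:

```python
import numpy as np

fs = 48_000
n = 2_000
t = np.arange(n) / fs

# Any signal bandlimited below fs/2 is fully determined by its samples.
# Build one from two tones below 24 kHz (arbitrary choices).
def signal(x):
    return (0.7 * np.sin(2 * np.pi * 5_000 * x)
            + 0.3 * np.sin(2 * np.pi * 17_000 * x + 1.0))

samples = signal(t)

# Whittaker-Shannon interpolation: x(t) = sum_k x[k] * sinc(fs*t - k),
# evaluated at an instant that falls between two sample points.
t0 = 1_000.37 / fs
reconstructed = float(np.sum(samples * np.sinc(fs * t0 - np.arange(n))))
print(reconstructed, signal(t0))   # the two values closely agree
```

          (The reconstruction is exact only for an infinite sample train; the tiny residual here comes from truncating the sinc sum to a finite window.)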
          Last edited by Veto; 07 April 2021, 06:47 PM.



          • #35
            Originally posted by Veto View Post
            No, it is not up to anyone to decide. You would have to be a dog or bat to hear some of these harmonics above ~20kHz.
            This is not 100 percent true. A particular surgical correction for glue ear, done to compensate for ear damage, can leave a person able to hear up to 23-24 kHz in that ear. This is particularly bad when someone blows a 100 dB dog whistle next to you, as I know personally. Without medical intervention you will not find a person with hearing above 22 kHz, but there are exceptions. Building a resonant tube, like the ones used to test silent dog whistles, into a person's head as part of a medical procedure seems insane until you work out that the tube covers the dead spot that would otherwise fall in normal speech frequencies, so the person most likely never needs hearing aids. Even with medical intervention you will not find human ears that go above 25 kHz; that true outlier is not a young person but someone who has had that particular ear operation.

            Also, we know from military tests that the human brain and other body parts do detect frequencies higher than 22 kHz.

            Originally posted by Veto View Post
            This is the only one of your arguments that actually holds some water and the real reason why it can make sense to sample well above double the signal bandwidth. However, it seems many people overestimate their own ability to tell a difference between a well authored 44.1ksps and 192ksps signal in a blind-test..
            That comes down to "well authored". Some of these tests need to be triple-blind to give the full picture. There is a question of whether the smoother pressure curves of 192 ksps result in less ear wear than 44.1 ksps. A lot of so-called well-authored material is also not designed to take advantage of the ultrasonic frequency ranges that we know the human brain can detect through the skull.

            Veto, many of the so-called double-blind tests between 44.1 ksps and 192 ksps, when you look closer, are not really designed to test whether people can tell the difference, because both versions have the same frequency cap, so the 192 ksps version is not allowed to use the extra frequencies it can represent. They also rely on personal opinion.

            One of the best tests I saw had a person take a hearing test, listen to a song at 44.1 ksps or 192 ksps, then take the hearing test again. That measures ear wear: how much listening to the song affects your hearing, and how long you can listen and still hear everything. If you asked the listeners whether the song sounded different, they said no, but in many of those studies the measured results did differ, with 192 ksps causing less ear wear when the gear could properly handle it. That difference probably matters more to professional game players and sonar operators than to people just listening to music.

            On ear wear there are not yet enough studies. The same goes for ultrasonics; we don't have enough research to understand everything yet.

            The hard reality is that a lot of the hard-and-fast rules we have been using in audio are wrong, and the data exists to say so without question. You also have to be aware that some studies were based on the wrong ideas, like testing whether humans can consciously tell 44.1 ksps from 192 ksps without actually using audio that takes advantage of what 192 ksps allows.

            If the 44.1 ksps and 192 ksps audio files are both restricted to ear hearing ranges in the output audio, you now need to test for ear wear, not for whether the person can tell the difference, because there is nothing the human brain can process differently in the nerve signals. Research papers on the development of cochlear implants tell us exactly what signals can leave the ear for the brain. An improved pressure wave will affect the hairs in the cochlea differently, but that will not produce a different signal to the brain beyond how much ear sensitivity (ear wear) is lost to the waveform.

            There are a lot of so-called studies by so-called audiophiles that are a pure waste of paper, because the authors did not understand ear mechanics, or the other mechanics of how humans hear, and so ran the wrong tests. Humans don't hear only with their ears. Without understanding the hearing mechanics, it is really easy to set up a test that produces total garbage. The horrible part is that people hold these results up and say there is no difference between 44.1 ksps and 192 ksps, when there is quite a bit of difference if you run the right tests, such as measuring ear wear or looking at the ultrasound humans can in fact hear through the skull.

            Please note that ultrasound heard through the skull alters emotional responsiveness and other subconscious things, so it is not as simple as asking a person which one sounded better. And a properly controlled per-person hearing test for the ear-wear case is very time-consuming; the proper tests require a lot of setup and work.



            • #36
              Originally posted by Veto View Post
              No, it is not up to anyone to decide. You would have to be a dog or bat to hear some of these harmonics above ~20kHz.
              I should have written 'up to you to decide if they matter TO YOU'. Again, I won't argue for or against whether humans can perceive such frequencies; there are already enough people who do.
              My point is to take a 'Linux approach' and let everyone decide for themselves whether they care. Either way, Linux (as in the software) should be able to reproduce such files without much hassle.

              Originally posted by Veto View Post
              This is the only one of your arguments that actually holds some water and the real reason why it can make sense to sample well above double the signal bandwidth. However, it seems many people overestimate their own ability to tell a difference between a well authored 44.1ksps and 192ksps signal in a blind-test...
              Let us assume there is a hard limit of about 20 kHz for human hearing and we cannot perceive anything higher (in reality there is no hard frequency limit; these limits are gradual). Then you'd still need a sampling frequency about a factor of 5 higher to mitigate the damping and phase shifts introduced by filters.
              44.1 kHz and 48 kHz work only because humans don't hear well above 10 kHz.

              Originally posted by Veto View Post
              Well, it seems you do not understand the Nyquist sampling theorem very well yourself :-) Even a step response can be perfectly reproduced within the bandwidth limit.
              Are you talking about Nyquist or Fourier?
              By Fourier, the step has infinite harmonics, so when you sample it you cut off everything above half the sampling frequency by Nyquist (and need analog filters, or you end up with aliasing).
              In other words, the qualifier _within_the_bandwidth_limit_ transforms the step into a slope.
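              That "step becomes a slope" effect is easy to visualize numerically; a sketch assuming a brick-wall lowpass (the 192 kHz rate and 20 kHz cutoff are arbitrary choices):

```python
import numpy as np

fs = 192_000
n = 4_096
step = np.zeros(n)
step[n // 2:] = 1.0                  # an ideal step at the buffer centre

# Brick-wall lowpass at 20 kHz, applied in the frequency domain.
# (The FFT treats the buffer as periodic, so there is a matching
# down-step at the wrap point; we only look at the centre edge.)
f_c = 20_000
spectrum = np.fft.rfft(step)
spectrum[np.fft.rfftfreq(n, 1 / fs) > f_c] = 0.0
bandlimited = np.fft.irfft(spectrum, n)

# The instantaneous edge becomes a finite-slope transition with Gibbs
# ringing; overshoot is roughly 9% of the jump for a brick-wall cutoff.
print("overshoot:", bandlimited.max() - 1.0)
```

              (Real anti-aliasing filters are not brick-wall, so in practice they trade some of this ringing for the damping and phase shift mentioned above.)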

              However, the point of this was actually something else: there is a timing and decay component to audio. It relates to the location of audio sources and to venue reflections, and humans are implicitly trained to detect these things...
              (Again not arguing that you need >20kHz but not dismissing it either)
              Last edited by mppix; 08 April 2021, 10:48 AM.



              • #37
                Originally posted by mppix View Post
                My point is to take a 'Linux approach' and let everyone decide for themselves whether they care. Either way, Linux (as in the software) should be able to reproduce such files without much hassle.
                Agreed!

                Originally posted by mppix View Post
                Let us assume there is a hard limit of about 20 kHz for human hearing and we cannot perceive anything higher (in reality there is no hard frequency limit; these limits are gradual). Then you'd still need a sampling frequency about a factor of 5 higher to mitigate the damping and phase shifts introduced by filters.
                44.1 kHz and 48 kHz work only because humans don't hear well above 10 kHz.
                Exactly! Humans don't hear well above 10kHz.

                Originally posted by mppix View Post
                Are you talking about Nyquist or Fourier?
                By Fourier, the step has infinite harmonics, so when you sample it you cut off everything above half the sampling frequency by Nyquist (and need analog filters, or you end up with aliasing).
                In other words, the qualifier _within_the_bandwidth_limit_ transforms the step into a slope.
                Thanks, Sherlock. Since we can't hear infinite harmonics anyway, that is kind of irrelevant. And when was the last time you actually heard or saw a mathematically perfect step signal in real life?

                Originally posted by mppix View Post
                However, the point of this was actually something else: there is a timing and decay component to audio. It relates to the location of audio sources and to venue reflections, and humans are implicitly trained to detect these things...
                True, but frequencies above ~16 kHz are not really necessary or relevant for spatial localization or reflections. (Remember: humans don't hear well above 10 kHz...)



                • #38
                  I switched from PulseAudio to PipeWire on my Gentoo machines. It worked fine with OpenRC, initialized Bluetooth correctly, and even lets me talk to JACK clients. My computers are now Poetterware-free, and it feels pretty great.



                  • #39
                    Originally posted by mppix View Post
                    The 44.1kHz and 48kHz work only because humans don't hear well above 10kHz.
                    Originally posted by Veto View Post
                    Exactly! Humans don't hear well above 10kHz.
                    Both of you have written something that is not true. It is true only for the human ear.


                    Here is a paper from 1991. It turns out we can in fact make out speech in the ultrasonic range through bone, that is, your skull, not your ear. A person can hear and understand speech delivered by ultrasound to bone with no inner ear at all.

                    The reality is that humans can hear quite well in particular ultrasonic ranges. Our vocal cords cannot produce those ranges. Some musical instruments can, but they are generally not used in recording because they don't sound right. In fact, once you work it out, most microphones cut out key tones of the instrument, and the playback speakers and headphones cannot reproduce those tones either.

                    44.1 kHz is from the 1970s, when the CD audio standard was created; 48 kHz is from the 1980s, after some of the really early studies started showing problems with 44.1 kHz not covering enough, and that was before we had anywhere near correct data on what people really could and could not hear. We are in fact still learning.

                    The 10 kHz figure, like the 20 kHz limit, is a myth if you present it as general human hearing. The 10 kHz and 20 kHz numbers only make sense when you are referring to sound humans hear through the inner ear. But humans don't hear only with their ears. There is bone hearing, which is ultrasonic; internal fleshy-organ hearing, which is infrasonic (subwoofer territory); and, it turns out, some brain hearing, with brain tissue itself picking up ultrasound. These are the four channels of human hearing we know about:
                    1) Normal audio: inner ear.
                    2) Infrasonic: flesh.
                    3) Ultrasonic: bone.
                    4) Ultrasonic: brain tissue.
                    Those are the ones we know about. In the 1970s we only knew about type 1; by the early 1980s, types 1 and 2; in the early 1990s we found out about 3; and in 2015 we found out about 4.

                    The reality is that the claims you two are making are correct only for a 1970s level of hearing knowledge. It is 2021; let's try to be a little more current.

                    Please note that we don't know whether that list of four is complete, and for ultrasonic bone and brain-tissue hearing we don't yet know the full frequency ranges.

                    Yes, ultrasonic bone conduction in fact works on the skull and the forearms. Why no other bones? We have no clue; the usual answer is that the human body just appears to be wired that way. It was the forearm finding that showed beyond question that ultrasonic bone hearing does not use the inner ear. The reality is that once you get to human hearing that is not restricted to the ears, we have lots of questions that cannot be answered at this stage.

                    Like it or not, what humans can hear is not a closed and finalized question.



                    • #40
                      Originally posted by oiaohm View Post
                      Like it or not, what humans can hear is not a closed and finalized question.
                      I find it odd that people are so obsessed with whether audio recording and playback is good enough, yet don't fight nearly as much over whether a fixed set of RGB (or even RGBY) channels in a monitor is good enough, when it's entirely possible that your cone cells have frequency response curves centred slightly differently from mine.
