PulseAudio 15 Lands mSBC Codec Support To Enable Bluetooth Wideband Speech

  • #41
    Originally posted by ssokolow View Post
    I find it odd that people are so obsessed with whether audio recording and playback is good enough, yet people don't seem to fight as much over whether a fixed set of RGB (or even RGBY) channels in a monitor is good enough when it's entirely possible that your cone cells are calibrated to have frequency response curves centred slightly differently from mine.
    Sorry to say, you're just not looking in the right areas. HDR monitors attract a lot of the same arguments. Vision itself is far better studied than audio.
    https://en.wikipedia.org/wiki/High-d...ng#Photography

    The table on that Wikipedia page shows how detailed our information about the human eye is. From a person's basic body dimensions you can work out the maximum number of visible pixels at different distances, and we know all the DNA variations behind colour perception well enough to generate a maximum range.

    Yes, your eyes and my eyes may have differently calibrated frequency responses, but you can draw an area map called "maximum human vision" and be sure that every single human falls inside it. The problem with human hearing is that you could only draw such a map if you restricted yourself to what is heard by the ears/inner ear. Once you talk about total human hearing, including just the infrasonics humans can hear, never mind the ultrasonics, we no longer have the data to draw that area map exactly. This is the first part of the problem.

    Next is a thing I call ear wear. The longer you listen to the same frequency, the less able you become to hear it; this is a mechanism related to industrial deafness, and yes, the shape of the waveform affects how quickly it happens. This leads to a common problem: a person says they find 96kHz audio output better than 48kHz, someone gets the smart-ass idea of running a blind test, the person cannot tell the difference between 96kHz and 48kHz, and everyone concludes there is no difference because the person cannot hear one. Really, they were testing the wrong thing.

    In reality, the person who ended up preferring 96kHz may have been listening to music for hours at a time. A better waveform means less ear wear, so the signal from ear to brain is more likely to be the same for the first song as for the last song a few hours later.

    Your vision generally is not like this; if it were, it would be insanely wacky and possibly dangerous. Imagine walking around suddenly unable to see red because you have seen too much red for the day, with the ability only coming back once you rest your eyes. Yet audio in the inner ear really is like that: listen to too harsh a dose of some frequency X and the hairs in your cochlea for that frequency get stuck, and you can no longer hear it until they reset, which may require sleep; and if it has been done too badly, that is permanent hearing loss.

    The mechanics of the inner ear are a true pain in the ass. Note that the mechanics of the infrasound and ultrasound areas of human hearing are just as bad: depending on when and how much you test a person, you can get very different results, let alone the differences caused by what sounds they have previously been exposed to.

    We really do need to admit how annoyingly complex hearing is and how little we actually understand it. We know far more about vision, and vision is nowhere near as complex as hearing.

    The subconscious brain messes with vision a bit (you don't see the blind spots in your eyes), but nowhere near as much as it messes with hearing. Horribly, what you think you hear can be altered by what you are currently seeing or smelling. Then you have the mechanical side of hearing, which is nowhere near as reliable.

    Human hearing is implemented on what you could call unreliable mechanics. For everyone, at some point in every day of their life, it has had multiple failures, starting even before birth, from as soon as the inner ear forms, and we don't notice because the subconscious brain hides them, just like the visual blind spots, doing a stack of creative work to cover up the failures. Add in personal bias and jumping to conclusions without proper research and study, and this explains why audio is a much bigger argument than monitors.

    The first thing everyone needs to admit, as a starting line, is that the design of human hearing is an absolute mess. A lot of what people have been told in textbooks and elsewhere about hearing is out-of-date garbage.



    • #42
      Originally posted by oiaohm View Post
      Sorry to say, you're just not looking in the right areas. HDR monitors attract a lot of the same arguments. Vision itself is far better studied than audio.
      That's interesting, and I'm honest in saying "thank you for sharing that"... but I was making an observation about people's behaviour when the topic comes up.



      • #43
        Originally posted by ssokolow View Post
        That's interesting, and I'm honest in saying "thank you for sharing that"... but I was making an observation about people's behaviour when the topic comes up.
        If you want an argument about monitors that is almost as bad as some areas of hearing, it's how fast the refresh rate should be. That's another area where we don't have complete data on how fast humans can really process at the edges. The less we know, the worse the arguments get, and the more people argue from things not actually based in fact. Audio is an area where lots of people think we know far more than we do. And worse, with hearing a lot of the old material in books is horribly wrong.

        Most of the bad behaviour is directly caused by how we humans respond to a lack of knowledge.

        Remember a while back when people said 60Hz was past what humans could see, so it was good enough? Then people started following the studies, and only recently has it become widely known that for particular games 250Hz and above truly does improve the speed of human response. This is another case where we cannot consciously see much past 25Hz, but as humans we can react subconsciously faster than that, and the subconscious filling in the gaps does get things wrong, like wheels appearing to turn the wrong way.

        The reality is that we humans have a horrible habit of liking absolutes, and of pushing absolutes with no basis in study or research, instead of accepting the correct answer: it is not fully studied and we don't really know.

        When it comes to hearing, we don't know a hell of a lot once you get outside inner-ear hearing.

        ssokolow there is a funny thing here: there are a lot of parallels between humans with religion and humans in areas where we are under-researched or poorly educated. In under-researched areas you basically get the uninformed equivalent of a religious zealot, and the worst part is that most of them don't know they are one. The more under-researched the area, the greater the zealot count.



        • #44
          Originally posted by oiaohm View Post
          Both of you have written something that is not true. This is true for the human ear
          ...
          Yes, ultrasonic bone conduction does in fact work through the skull and forearms. Why no other bones? We have no clue; the usual answer is that the human body appears to be wired that way for some reason. It was the forearm result that showed, without question, that ultrasonic bone conduction was not using the inner ear. The reality is that we have lots of questions that cannot be answered at this stage once you get to the human hearing that is not restricted to the ears.

          Like it or not, what humans can hear is not a closed and finalised question.
          This is all very interesting and such, especially for making hearing aids and implants for the hearing impaired. But I guess we should limit this discussion to normal sound transmitted by air and reproduced by speakers...

          People wanting extended frequency ranges really ought to play with a tone generator once in a while...



          • #45
            Originally posted by Veto View Post
            This is all very interesting and such, especially for making hearing aids and implants for the hearing impaired. But I guess we should limit this discussion to normal sound transmitted by air and reproduced by speakers...
            https://www.cco.caltech.edu/~boyk/spectra/spectra.htm
            There are many musical instruments whose air-transmitted sound goes well into the ultrasound range, up over 100kHz, at levels that bone- or brain-conducted ultrasonic hearing can pick up. Remember that ultrasonic starts at 20kHz and goes up, so that is over four times the frequency space of normal hearing. Not all of that range is audible to humans, but some of it is, and it has to be researched and studied to know which areas up there are important.

            This raises a genuinely interesting problem: we have not been designing speakers and recording systems to properly capture audio at performances. It is partly why a performance without amplification sounds so different from the same performance with amplification: the ultrasonics are not being reproduced.

            The 192kHz sample rate is likely short of what is actually required for a proper recording covering everything humans can in fact hear by one means or another.

            https://en.wikipedia.org/wiki/File:H...tion_audio.svg
            Yes, 192kHz appeared in DVD-Audio in the year 2000, based on the early research of the time. So 192kHz playback, with generation up to 96kHz, is what you should want to support, and we know even this falls short.

            Yes, a lot of places still give the incorrect 20kHz as the top of the human hearing range even though the research says otherwise. There are four bands of human hearing that we know of.
            Last edited by oiaohm; 09 April 2021, 04:30 PM.



            • #46
              Originally posted by Veto View Post
              Thanks Sherlock. Since we can't hear infinite harmonics anyway, that is kind of irrelevant. And when was the last time you actually heard or saw a mathematically perfect step response signal in real life??

              True, but frequencies above ~16kHz are not really necessary/relevant for spatial location or reflections. (Remember: Humans don't hear well above 10kHz...)
              Alright Watson, we need to talk.
              First, there is no such thing as an ideal step response. However, any commodity frequency generator will create step responses that contain frequencies well into the hundreds of GHz.

              Anyhow, you are not wrong. 'High-fidelity' (as in true to the master recording) reproduction above 10kHz or 12kHz is not really needed for laptop speakers or for headphones plugged into laptop 3.5mm sockets. The SNR, THD, etc. of these components are too poor for such things to matter.

              However, I happen to have a pretty decent 2-channel system and I can say that it is possible to distinguish DAC filters when playing 44.1/48kHz audio (yes, I blind tested for fun with a friend and he could detect different filters as well). At 88.2/96kHz it was somewhat of a guessing game, and we were not able to tell anything at 176.4/192kHz.
              Does this mean that we can hear 192kHz? Of course not. However, filtering at 44.1/48kHz can definitely bleed into the audible range.
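              The filter-bleed point is easy to illustrate numerically. Below is a rough numpy sketch of my own (a toy Hamming-windowed sinc low-pass, not whatever filters a real DAC ships): a short "slow" filter with its corner near 21kHz is already a fraction of a dB down at 18kHz, i.e. inside the audible band, while a long "sharp" filter is still flat there.

```python
import numpy as np

def windowed_sinc_lowpass(cutoff_hz, fs, ntaps):
    # Hamming-windowed sinc FIR low-pass, normalised to unity gain at DC
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = np.sinc(2 * cutoff_hz / fs * n) * np.hamming(ntaps)
    return h / h.sum()

def gain_db(ntaps, freq_hz, fs=44_100, cutoff_hz=21_000):
    # Magnitude response of the filter at freq_hz, in dB
    h = windowed_sinc_lowpass(cutoff_hz, fs, ntaps)
    H = np.abs(np.fft.rfft(h, 8192))
    f = np.fft.rfftfreq(8192, 1 / fs)
    return 20 * np.log10(H[np.argmin(np.abs(f - freq_hz))])

print(round(gain_db(511, 18_000), 2))   # sharp filter: ~0dB at 18kHz
print(round(gain_db(15, 18_000), 2))    # slow filter: already rolling off at 18kHz
```

              The filter lengths and 21kHz corner are arbitrary illustration values; the point is only that a gentler transition band necessarily starts inside the audible range.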

              For me, it is not even about 'clarity' but about 'listening fatigue'. I can simply listen to high-res audio longer and enjoy it*. 44.1/48kHz with a sharp filter definitely puts a timer on my enjoyment; 44.1/48kHz with a slow filter makes things duller than needed. I can live with either if I have to - but I don't think that I have to.

              Last, I would like to point out that I don't think we can double-blind test everything - yes, I am still a big fan of the scientific method. However, it is also known that our brain perceives things without our awareness when they are below a threshold. Have a look at subliminal perception/advertising.

              Let me close with the thought that audio quality decreased from vinyl to cassettes/walkmans to mp3 players to an all-time low with YouTube. Music is omnipresent but worse than ever. Yes, CD is better than all of them, but its successors - SACD, DVD-A, Blu-ray Audio - were massive failures. Our fathers had (much) better stereos in the 60s-80s than their sons and daughters in the 00s-20s, and we are at the point where, for many people, the benchmark for good audio is Beats Pro and Bose noise-cancelling headphones...
              I'd advocate increasing audio quality for a change.

              edit: here is a slightly different take:
              https://www.audiosciencereview.com/f...-it-matter.11/



              * Assuming well-recorded music that is not purely mixed for average car speakers, boomboxes, and Beats Pro (of key importance for artists to be successful). However, combine heavy dynamic compression (loudness war etc.) with quite reasonable (lossy) compressed file formats and yes, there may be no frequency content left above 12kHz...
              Last edited by mppix; 10 April 2021, 10:57 PM.



              • #47
                ssokolow and oiaohm it depends what you are after. Engineering for optima often involves finding the point of diminishing returns and discarding information beyond it. In audio, the typical example would be lossy codecs that discard audio information that is difficult to hear. Also, ultra-compact speakers (as in laptops and portable electronics) are designed to cover just the essential frequency bands.

                Alternatively, one can make a system 'good enough' by making it 5-10 times better than the (reasonable) limit. In audio, this would be something like 192kHz+. Ideally, you'd make each sample 24 (or 32) bits to permit digital volume control without losing the roughly 90-110dB of (audible) dynamics at the output.
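                As a back-of-the-envelope check on those numbers, the standard ideal-quantiser formula (6.02·N + 1.76 dB for a full-scale sine) puts 16 bits just under 100dB and 24 bits comfortably past the 90-110dB audible window:

```python
def dynamic_range_db(bits):
    # Theoretical SNR of an ideal linear PCM quantiser, full-scale sine input
    return 6.02 * bits + 1.76

for bits in (16, 24, 32):
    print(bits, round(dynamic_range_db(bits), 1))  # 98.1, 146.2, 194.4 dB
```

                Real converters fall short of these theoretical figures, but the relative headroom between bit depths holds.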



                • #48
                  Originally posted by mppix View Post
                  However, I happen to have a pretty decent 2-channel system and I can say that it is possible to distinguish DAC filters when playing 44.1/48kHz audio (yes, I blind tested for fun with a friend and he could detect different filters as well). At 88.2/96kHz it was somewhat of a guessing game, and we were not able to tell anything at 176.4/192kHz.
                  Does this mean that we can hear 192kHz? Of course not. However, filtering at 44.1/48kHz can definitely bleed into the audible range.

                  For me, it is not even about 'clarity' but about 'listening fatigue'. I can simply listen to high-res audio longer and enjoy it*. 44.1/48kHz with a sharp filter definitely puts a timer on my enjoyment; 44.1/48kHz with a slow filter makes things duller than needed. I can live with either if I have to - but I don't think that I have to.
                  https://www.aes.org/e-lib/browse.cfm?elib=20455

                  There is a 2019 study: if you are intentionally trained to detect listening fatigue/ear wear, then with the right time frame of audio you can tell the difference between 88.2/96kHz and 176.4/192kHz 98% of the time. But it is really easy to set the test up wrong.

                  Originally posted by mppix View Post
                  https://www.audiosciencereview.com/f...-it-matter.11/
                  * Assuming well-recorded music that is not purely mixed for average car speakers, boomboxes, and Beats Pro (of key importance for artists to be successful). However, combine heavy dynamic compression (loudness war etc.) with quite reasonable (lossy) compressed file formats and yes, there may be no frequency content left above 12kHz...
                  This is not 100% wrong, but the 2019 study brings an interesting curve ball. Yes, you may have a lossy file with no frequency content left above 12kHz, yet if you up-convert it correctly to 24bit 192kHz a person will be able to listen to it for longer. There is an ear-mechanics problem here.

                  Originally posted by mppix View Post
                  ssokolow and oiaohm it depends what you are after. Engineering for optima often involves finding the point of diminishing returns and discarding information beyond it. In audio, the typical example would be lossy codecs that discard audio information that is difficult to hear. Also, ultra-compact speakers (as in laptops and portable electronics) are designed to cover just the essential frequency bands.

                  Alternatively, one can make a system 'good enough' by making it 5-10 times better than the (reasonable) limit. In audio, this would be something like 192kHz+. Ideally, you'd make each sample 24 (or 32) bits to permit digital volume control without losing the roughly 90-110dB of (audible) dynamics at the output.
                  It's a bit of a catch: humans can hear above 20kHz. Our ears are mechanically designed for smoother waveforms than 44.1/48 or 88.2/96kHz can generate, and possibly even 176.4/192kHz. So even if you are only generating sub-20kHz audio for humans to listen to, you need a lot more frequency range.

                  This is like the old mistake of thinking CD players could get by with an 8-bit DAC instead of a 12-bit DAC because over a short listen a human does not notice the difference; the difference here is also the speed of ear wear. The reality is that what counts as the essential frequency bands, and the required DAC bit depth, have been increasing over time as we have understood more.

                  The problem here is that some of the studies I am up on are from 2019; there has not been enough time for consumer laptop and portable-electronics speakers to catch up with them.

                  There are two reasons why you need 192kHz 24-bit audio or better on audio output:
                  1) humans absolutely can hear past 20kHz using parts of the body other than the inner ear;
                  2) the inner ear, for hearing under 20kHz, turns out to be a lot pickier about the waveform than people have presumed, and we now have studies that show it.

                  There has been an increased rate of hearing loss in the general public since the adoption of 44.1 and 48kHz output. At first this was presumed to be because people were listening to too much audio at too high a volume, but that is only half the picture. With a better waveform you can listen at a higher volume before getting inner-ear damage, which brings a very interesting curve ball. It's the mechanics of the ear hairs in the cochlea: they like to be pushed around more gently. Think carefully: if you measure total exposed energy, the higher maximum volume at which the better-shaped waveform starts doing ear damage lands at around the same total exposed energy as the lower volume at which the poorer waveform does.

                  Yes, louder output with less input power is also great for device battery life, as long as you are not losing too much in processing by also going up in frequency and DAC bits.

                  So the realities of the studies say that over the next decade consumer speaker systems need to move up to supporting 192kHz 24-bit or better. Yes, those making music may stay at 12kHz or less, upsampled into 192kHz 24-bit or better waveforms on output from the device. As 192kHz 24-bit or better becomes more common, so will the option for composers to go after the ultrasonic parts of human hearing with larger market access.

                  The hard reality is that if consumer electronics don't improve, in a decade or two makers could be getting sued for the long-term ear damage in the sub-20kHz range their choices have caused. It is one thing to make an ear-damaging device when we lack the knowledge to know better; it is a completely different matter once we do know.

                  Why do PipeWire and PulseAudio need to support 192kHz 24-bit or better going forward? Because, like it or not, that is what the output on future devices will have to become.



                  • #49
                    Originally posted by oiaohm View Post
                    This is not 100% wrong, but the 2019 study brings an interesting curve ball. Yes, you may have a lossy file with no frequency content left above 12kHz, yet if you up-convert it correctly to 24bit 192kHz a person will be able to listen to it for longer. There is an ear-mechanics problem here.
                    This has nothing to do with a system capable of 24bit 192kHz playback. A file upconverted from 16bit 44.1kHz (or sampled from an analog master) does not contain ANY information that uses the higher bit depth and frequency (95% or even 99% of high-res albums are made this way). Playing such a file is equivalent to having an oversampling DAC resample a 44.1kHz file. The analog output may only differ due to filtering artefacts from the upres/filter algorithm.
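                    The "no new information" point can be demonstrated directly. This sketch (plain numpy, FFT zero-padding in the style of FFT-domain resamplers) upsamples a band-limited 48kHz signal by 4x: the "192kHz" version has essentially zero energy above the original Nyquist frequency and still passes through every original sample.

```python
import numpy as np

def fft_upsample(x, factor):
    # Zero-pad the spectrum: the new band above the old Nyquist stays empty,
    # so upsampling adds no information. (Simplified: assumes the input has
    # no energy at its Nyquist bin, true for anything band-limited below it.)
    n = len(x)
    X = np.fft.rfft(x)
    X_up = np.zeros(n * factor // 2 + 1, dtype=complex)
    X_up[:len(X)] = X
    return np.fft.irfft(X_up, n=n * factor) * factor

fs = 48_000
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 750 * t)            # a 750Hz tone: ordinary audible content
y = fft_upsample(x, 4)                     # the pretend "192kHz" version

Y = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), d=1 / (fs * 4))
above = Y[freqs > fs / 2].sum() / Y.sum()
print(above)                               # ~0: nothing new above 24kHz
print(np.allclose(y[::4], x))              # True: original samples untouched
```

                    Real-world upconverters use different interpolation filters, but none of them can conjure content above the source's Nyquist limit.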

                    Originally posted by oiaohm View Post
                    This is like the old mistake of thinking CD players could get by with an 8-bit DAC instead of a 12-bit DAC because over a short listen a human does not notice the difference; the difference here is also the speed of ear wear. The reality is that what counts as the essential frequency bands, and the required DAC bit depth, have been increasing over time as we have understood more.
                    Human ears can perceive about 140dB of dynamic range, but not all at the same time. 100-110dB is as good as we can do - try listening to your footsteps in the grass (~30dB) while mowing it (~80dB). Anyhow, this puts the needed bit depth slightly above 16 bits (8 or 12 bits is inadequate). Higher bit depths are (very) useful for digital volume control: you can then reduce the digital signal to the lower 16 bits without losing resolution.
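                    The volume-control argument is worth making concrete. Digital attenuation shifts the signal down toward the quantisation noise floor, costing roughly one bit of resolution per 6dB; a quick sketch with ideal-quantiser numbers (real DACs will do a little worse):

```python
def ideal_snr_db(bits):
    # SNR of an ideal N-bit quantiser driven by a full-scale sine
    return 6.02 * bits + 1.76

def snr_after_attenuation_db(bits, attenuation_db):
    # Digital attenuation eats directly into the available SNR,
    # i.e. about one bit of resolution lost per ~6dB of cut
    return ideal_snr_db(bits) - attenuation_db

for bits in (16, 24):
    print(bits, round(snr_after_attenuation_db(bits, 48), 2))
```

                    Turning a 16-bit stream down 48dB leaves about 50dB of SNR (8-bit territory), while a 24-bit stream still has roughly 98dB left, which is exactly why the extra bits pay off for digital volume control.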

                    Originally posted by oiaohm View Post
                    There are two reasons why you need 192kHz 24-bit audio or better on audio output:
                    1) humans absolutely can hear past 20kHz using parts of the body other than the inner ear;
                    2) the inner ear, for hearing under 20kHz, turns out to be a lot pickier about the waveform than people have presumed, and we now have studies that show it.
                    To my knowledge this cannot be proven beyond a doubt. Sound is a variation in air (or whatever medium we happen to be in) pressure. Yes, higher frequencies can have an effect on the human body, and they can even be weaponized. However, I don't think it is clear whether that is part of what we call hearing.

                    Originally posted by oiaohm View Post
                    There has been an increased rate of hearing loss in the general public since the adoption of 44.1 and 48kHz output. At first this was presumed to be because people were listening to too much audio at too high a volume, but that is only half the picture. With a better waveform you can listen at a higher volume before getting inner-ear damage, which brings a very interesting curve ball. It's the mechanics of the ear hairs in the cochlea: they like to be pushed around more gently. Think carefully: if you measure total exposed energy, the higher maximum volume at which the better-shaped waveform starts doing ear damage lands at around the same total exposed energy as the lower volume at which the poorer waveform does.
                    The primary cause of hearing loss is volume, not format! Most music just happens to be in 44.1/48kHz (and I include all the upsampled music that is sold as high-res). Also, digital music is not rendered in steps, if that is what you are suggesting. There is an array of possible output filter techniques, and we are talking about a pretty decent hi-fi system if its amplifiers/speakers/headphones can render anything significantly beyond 20kHz.

                    Originally posted by oiaohm View Post
                    So the realities of the studies say that over the next decade consumer speaker systems need to move up to supporting 192kHz 24-bit or better. Yes, those making music may stay at 12kHz or less, upsampled into 192kHz 24-bit or better waveforms on output from the device. As 192kHz 24-bit or better becomes more common, so will the option for composers to go after the ultrasonic parts of human hearing with larger market access.

                    The hard reality is that if consumer electronics don't improve, in a decade or two makers could be getting sued for the long-term ear damage in the sub-20kHz range their choices have caused. It is one thing to make an ear-damaging device when we lack the knowledge to know better; it is a completely different matter once we do know.
                    Doubtful. From a big-picture perspective, I'd be happy if music delivery (streaming) got to lossless CD quality. Most music is still delivered in lossy formats, with YouTube being particularly bad.
                    As for hearing loss, I also doubt that you can hold electronics companies accountable. The primary cause is still volume, and not blasting music over the subway noise but using noise-cancelling headphones to hear music at lower volume is way more important.
                    Note that I don't disagree that we need less volume to enjoy music at better quality. However, we are talking about a few dB of difference.

                    Originally posted by oiaohm View Post
                    Why do PipeWire and PulseAudio need to support 192kHz 24-bit or better going forward? Because, like it or not, that is what the output on future devices will have to become.
                    PipeWire will have to support it primarily for studio/JACK API use, and people like me (and you) will benefit. I have no idea about PA.
                    It will be years if not decades before we see general adoption of higher resolution for radio and streaming because (a) streaming bandwidth is still a significant cost for the service provider and few people would pay extra for it, (b) most people don't have audio systems that would benefit from it, and (c) 99% (or 95%) of music was recorded in standard-definition analog/digital and cannot benefit from it.
                    Just universally getting to lossless standard definition would be a big step.

                    Last edited by mppix; 11 April 2021, 11:18 AM.



                    • #50
                      Originally posted by mppix View Post
                      This has nothing to do with a system capable of 24bit 192kHz playback. A file upconverted from 16bit 44.1kHz (or sampled from an analog master) does not contain ANY information that uses the higher bit depth and frequency (95% or even 99% of high-res albums are made this way). Playing such a file is equivalent to having an oversampling DAC resample a 44.1kHz file. The analog output may only differ due to filtering artefacts from the upres/filter algorithm.
                      How that up-scaling is done is under review because of the problems that have been found.

                      Originally posted by mppix View Post
                      Human ears can perceive about 140dB of dynamic range, but not all at the same time. 100-110dB is as good as we can do - try listening to your footsteps in the grass (~30dB) while mowing it (~80dB). Anyhow, this puts the needed bit depth slightly above 16 bits (8 or 12 bits is inadequate). Higher bit depths are (very) useful for digital volume control: you can then reduce the digital signal to the lower 16 bits without losing resolution.
                      As soon as you said "perceive", everything from that point on is wrong for the mechanical parts of the ear. Effectively what you have said is like saying a person can travel at 100km per hour and then wondering why they die when they stop in a split second.

                      The reality is that you need 24 bits or greater to generate waveforms that suit the mechanical parts of the ear.

                      Originally posted by mppix View Post
                      As for hearing loss, I also doubt that you can hold electronics companies accountable. The primary cause is still volume
                      This is where you are straight-up wrong. The primary cause is a combination of volume and waveform. The less natural the waveform, the lower the volume required to cause ear damage. Our ears developed with natural sounds, not digitally generated ones. A square-wave tone generator is more ear-damaging than a sine-wave generator at the same volume and the same frequency.

                      So the shape of the generated waveform is very important, and the shape details needed to protect the mechanical parts of the ear are outside what we can directly perceive.
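                      Whatever one makes of the physiology claims above, the raw energy difference between the two waveforms is easy to verify: at the same peak level a square wave carries twice the power of a sine, about 3dB more RMS. A quick numpy check:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs                      # one second of samples
phase = 2 * np.pi * 1_000 * t + 0.1         # small offset avoids exact zero samples
sine = np.sin(phase)                        # 1kHz sine, peak amplitude 1
square = np.sign(np.sin(phase))             # 1kHz square, same peak amplitude

rms = lambda s: np.sqrt(np.mean(s ** 2))    # root-mean-square level
extra_db = 20 * np.log10(rms(square) / rms(sine))
print(round(extra_db, 2))                   # 3.01 dB more energy in the square wave
```

                      The 1kHz tone and the phase offset are arbitrary illustration choices; the 2x power ratio holds for any square vs sine at equal peak.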

                      A lot of audio studies have been done on human perception, not on the ear's mechanism. Workplace health/OHS (whatever your country calls it) device certification cares about the mechanism, not the perception, and it is that sector that is going to force device makers to update.

                      Originally posted by mppix View Post
                      Note that I don't disagree that we need less volume to enjoy music at better quality. However, we are talking about a few dB of difference.
                      This is wrong. You can enjoy better-quality music at higher volume without doing ear damage, giving you more dB of headroom to overcome background noise. And the scary point is that it is not a small difference: between a poor waveform and a good waveform, the dB level you can take before ear damage begins differs by over 20dB.

                      These differences are starting to explain why people get industrial deafness in workplaces where nothing generates dB levels above the accepted limits. Yes: something in the workplace generating a nicely squarish wave, causing rapid acceleration and deceleration of the hairs in the ear.

