New Sound Drivers Coming In Linux 4.16 Kernel


  • #21
    This whole "I can hear the difference between 24-bit and 32-bit" claim was debunked by Monty (Xiph) a long time ago.

    32-bit is only useful for studio work, because every time you layer one audio effect over another, it introduces a little more noise. At 32-bit, the added noise is so incredibly low that there is virtually no noise gain even after layering hundreds of effects.
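
    A rough Python sketch of that claim (the 0.999 per-effect gain and the 200 layers are made-up illustration values, not anyone's real processing chain): requantize a test signal after each effect stage and compare the accumulated error at different working bit depths.

    import numpy as np

    def snr_after_layers(bits, layers, n=1 << 16, gain=0.999):
        """Apply `layers` gain stages, requantizing to `bits` after each,
        and return the SNR in dB versus the infinitely precise result."""
        rng = np.random.default_rng(0)
        x = rng.uniform(-1.0, 1.0, n)        # stand-in "signal" in [-1, 1]
        step = 2.0 ** -(bits - 1)            # quantization step for signed samples
        y = x.copy()
        for _ in range(layers):
            y = np.round(y * gain / step) * step   # effect stage + requantize
        ref = x * gain ** layers                   # ideal result, no rounding
        noise = y - ref
        return 10 * np.log10(np.mean(ref ** 2) / np.mean(noise ** 2))

    for bits in (16, 24, 32):
        print(f"{bits}-bit after 200 layers: {snr_after_layers(bits, 200):.0f} dB SNR")
    # 16-bit degrades measurably; 32-bit stays far above any audible threshold.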

    192kHz is also only really useful in studios, and I believe the only people who can actually take advantage of that sample rate are those recording orchestras. The most a listener needs is 48kHz, because that leaves enough accuracy for the highest frequencies (~20kHz) a human can possibly hear.

    THD and SNR are probably the most important things you can actually notice in a modern DAC, and even then we don't need that much. The brain fills in missing information all the time.

    The idiot talking about human evolution is living in a fantasy world. When the most you could get out of a recording was 6 bits, of course people were fine with it: they simply had not heard anything at a higher resolution before. Now we are at the point where you can get incredibly accurate reproduction despite massive compression ratios. Humans recognize patterns, so back when encoder artifacts were audible, you could tell a compressed song from an uncompressed one. The same thing is true for new games versus old games: now that we have seen the higher clarity, it's hard to go back to much less. Those terrible encoders were more than a decade ago; today, the weakest links are the physical components that actually reproduce the sound.

    I have a song that I always thought was poorly mastered because it distorted in the louder parts when a lot of different sounds were going on. It was the same problem in my car: it sounded distorted regardless of the audio level coming out of my phone. One day I got a pair of Fostex T50RPs and noticed a lot of details in my songs that I had never heard before. Eventually I came across that (previously) distorted song, and this time it sounded clear and powerful. Physically, the traditional (and cheap) speaker-cone materials cannot retain their shape and rigidity well enough to reproduce sounds that change faster than the material can move.

    That being said, my biggest gripe with these headphones is the lack of bass, so I got 5.0 speakers for about the same price as those headphones and used a $40 amp to power them until I could get a proper home-theater receiver.

    Originally posted by caligula View Post
    The problem with true HiFi sound chips is that the content is lacking. Most audio is just 48 kHz 24-bit studio material. That's too weak for HiFi experts who expect 32 bits and 192 kHz (nowadays 384 kHz). Also, most recordings are stereo, which is quite weak compared to 7.1.
    1) I'll have you know that I do, in fact, have 24-bit 96kHz WAV music, and I cannot tell the difference between it and the same tracks re-encoded to 16-bit, or to 24-bit 48kHz FLAC.

    2) A stereo recording is all you need, as long as it has been properly mixed. The problem with listening to 7.1 audio on headphones that can only do 2.0 stereo is that you need a really good matrix or surround-sound downmixer in software. Even a home theatre with 7.1 surround isn't perfect: there are no sounds from above or below, and if you move your head around, the image often falls out of alignment with the speakers.
    Last edited by profoundWHALE; 20 January 2018, 02:06 PM.



    • #22
      Originally posted by aht0 View Post

      The human ear is killed off early on these days. Most people do not realize that noise damage is permanent damage.

      My left ear is down about 35dB in the 3-5kHz range from shooting without ear muffs when I was younger. By the time the issue was discovered in a routine check, it was already too late.

      Now I literally shudder every time I see some dude or gal listening to music so loud through their headset that I can clearly hear it from 2 meters away. It happens multiple times a day, usually with people from pre-teens to their 20s. Or the idiots aged 18 to 40 with seemingly 1kW of bass booming through half the city from their cars. Deaf as a stone wall before hitting 50.
      Thank God I never went shooting, and I can still hear when an audio output amplifier has a weird frequency response ;-)



      • #23
        Originally posted by ids1024 View Post

        Nice video. It looks like I submitted my first kernel patch to the sound subsystem just in time for it to end up in this pull request.

        https://iandouglasscott.com/2018/01/...nd-blaster-e1/
        https://git.kernel.org/pub/scm/linux...c1e0e42ba35619
        You may also find my just-released initial SGI Octane MIPS64 Linux overview interesting: https://www.youtube.com/watch?v=AU_RV8uoTIo
        Last edited by rene; 20 January 2018, 05:25 PM.



        • #24
          Originally posted by starshipeleven View Post
          Evolution is just an effect of natural selection. Natural selection happens when individuals with "bad" traits can't reproduce for some reason (they die sooner, the opposite sex dislikes them, whatever).
          So unless people with worse hearing somehow manage to have more children than those with normal hearing, this won't happen.
          Sure, but nothing guarantees that a trait stays around forever once it is no longer necessary for the survival of a species. It could simply get mutated away over time, perhaps in favor of some other trait.



          • #25
            For those wondering what can change: the DMA interfacing and the control of the engine. It can have a DSP or not. The codec might have slight changes. The mixer/audio path might change.
            To be clear: audio interfacing has become more and more braindead since Intel introduced HDA.
            Before that, at least on PCs, multiple channels were mixed in hardware. These days the CPU needs to mix everything in software and trash its cache, just because the HDMI output does not have a PCM channel mixer in front of it.
            As for the amplifier side and such, I had no problem attaching an I2S codec with an integrated class-D amplifier to a generic I2S DMA engine.



            • #26
              Originally posted by starshipeleven View Post
              Then what do they represent? Are they just a lie or are they some theoretical max?
              They represent best effort. I.e., they have a real 32-bit decoder inside, but there is just no way for them to create a true 32-bit output. Line level, which is used for analogue sound transmission between equipment, has a total range of 3.472 V (for professional equipment! consumer gear is slightly less), which means that the one-bit step this chip would have to produce is about 1 nV (0.000000001 V). The noise level alone is greater than that in any integrated circuit.
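
              A back-of-the-envelope Python sketch of that step size, using the 3.472 V professional line-level range quoted above (the function name is just illustrative):

              def lsb_volts(bits: int, full_scale: float = 3.472) -> float:
                  """Voltage of one least-significant-bit step of an ideal N-bit DAC."""
                  return full_scale / 2 ** bits

              for bits in (16, 24, 32):
                  print(f"{bits}-bit LSB: {lsb_volts(bits):.2e} V")
              # 16-bit LSB: 5.30e-05 V  (~53 uV)
              # 24-bit LSB: 2.07e-07 V  (~0.2 uV)
              # 32-bit LSB: 8.08e-10 V  (~0.8 nV, below any IC's noise floor)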

              That of course also clearly shows why it's mostly a numbers game, since you cannot hear a difference that small. In fact, the 16 bits for CDs were set by Philips based on in-house studies of the needed dynamic range, with headroom added on top.

              So there does not exist anyone who can hear the difference between a 16-bit and a 24-bit sample if both are created correctly. The only reason studios use 24-bit and above is to enable mixing, and so that they do not lose useful resolution when compressing the dynamic range.

              The same is true for frequency: the 44.1kHz used by CDs can truly represent 100% of every sound in the 0-22kHz range. However, this also means that if you accidentally record sound above 22kHz, it will create digital artefacts that look like lower-frequency material in your data, so you must have a filter. Filtering at 22kHz is not easy, since filters (even brick-wall filters) are not infinitely steep, so it's much better to record at 96kHz, apply an ordinary analogue filter somewhere between 22kHz and 48kHz, and then apply a digital filter when you downsample to CD quality.
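
              A tiny Python sketch of that aliasing effect (the 30kHz tone and the one-second window are just example values):

              import numpy as np

              fs = 44_100                  # CD sample rate, Nyquist = 22 050 Hz
              f_in = 30_000                # ultrasonic tone we "accidentally" record
              n = np.arange(fs)            # one second of samples -> 1 Hz FFT bins
              x = np.sin(2 * np.pi * f_in * n / fs)

              # The strongest bin lands at fs - f_in, not at f_in.
              spectrum = np.abs(np.fft.rfft(x))
              alias_hz = np.argmax(spectrum) * fs / len(x)
              print(f"{f_in} Hz sampled at {fs} Hz shows up at ~{alias_hz:.0f} Hz")  # ~14100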

              The only times people have heard differences are when they either did not do a proper ABX test (the golden ears are so sensitive that they must know which equipment is playing in order to hear the difference...) or when the tested equipment deliberately alters the sound (it's not uncommon for very expensive hi-fi cables to include electronics that alter the sound, i.e. an equalizer, so that people can claim to hear a difference).



              • #27
                Originally posted by caligula View Post

                So you think this is fake news?


                "This first member of the ESS PRO SABRE series sets a new benchmark in high-end audio by offering the industry's highest dynamic range (DNR) of 140dB. The ES9038PRO also offers impressively low total harmonic distortion plus noise (THD+N) at -122dB in a 32-bit, 8-channel DAC."
                It's a press release, and by definition fake news ;-). And if you really think there exists a low-cost chip that can truly create an analogue output signal with a 0.000000001 V step, then I have a bridge to sell you.

                For it to be a real 32-bit DAC, an input of 0xFFFFFFFF would have to yield an exact output of 1.736000000 V and 0xFFFFFFFE an exact output of 1.735999999 V.



                • #28
                  Originally posted by F.Ultra View Post

                  It's a press release, and by definition fake news ;-). And if you really think there exists a low-cost chip that can truly create an analogue output signal with a 0.000000001 V step, then I have a bridge to sell you.

                  For it to be a real 32-bit DAC, an input of 0xFFFFFFFF would have to yield an exact output of 1.736000000 V and 0xFFFFFFFE an exact output of 1.735999999 V.
                  You're probably right; the actual real-world performance might not be that good. But the DACs probably still have actual slots for 32 bits. I'm also guessing that in a few years phones will have 64-bit DACs, and 16K graphics on 5" screens.



                  • #29
                    Originally posted by caligula View Post

                    You're probably right; the actual real-world performance might not be that good. But the DACs probably still have actual slots for 32 bits. I'm also guessing that in a few years phones will have 64-bit DACs, and 16K graphics on 5" screens.
                    They might, and that too would be completely useless. Already with a 24-bit DAC you have a dynamic range of 144.49dB, which is well above the level that causes immediate and permanent hearing damage, so no music will ever use even a fraction of that range. Even a 16-bit DAC, with its 96.33dB of dynamic range, can truly reproduce sounds loud enough to cause hearing damage, so even that is actually overkill for a DAC. A 24-bit or 32-bit ADC is a whole different story, but that is not what we are discussing here.
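
                    A quick Python sketch of where those figures come from, i.e. the ideal-quantizer formula 20*log10(2**N) (the function name is just illustrative):

                    import math

                    def dynamic_range_db(bits: int) -> float:
                        """Ideal dynamic range of an N-bit quantizer."""
                        return 20 * math.log10(2 ** bits)

                    for bits in (16, 24, 32):
                        print(f"{bits}-bit: {dynamic_range_db(bits):6.2f} dB")
                    # 16-bit:  96.33 dB
                    # 24-bit: 144.49 dB
                    # 32-bit: 192.66 dB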



                    • #30
                      Originally posted by F.Ultra View Post

                      They might, and that too would be completely useless. Already with a 24-bit DAC you have a dynamic range of 144.49dB, which is well above the level that causes immediate and permanent hearing damage, so no music will ever use even a fraction of that range. Even a 16-bit DAC, with its 96.33dB of dynamic range, can truly reproduce sounds loud enough to cause hearing damage, so even that is actually overkill for a DAC. A 24-bit or 32-bit ADC is a whole different story, but that is not what we are discussing here.
                      For some reason people were happy with CD audio quality (or even lower) for a long time. MP3 rips were often 22 kHz in the late 1990s. 48 kHz sound cards became more common around 2000 with the SB Live and Intel HD Audio. After that, things changed a lot: first 48, then 96, then 192 kHz, and now 384 kHz is standard in the high-end audio world. DAC bit counts have also steadily increased, from 8 to 16 to 24 to 32 bits, and high-end gear claims 64 bits or more. Same thing with video cards: 8 bits per channel was good enough for quite a long time, and 24-bit true color was standardized in 1994. Now, for the first time, normal consumers want 30-bit, and the latest standards (HDMI, DisplayPort) even support up to 48 bits. It's really surprising that 30-bit wasn't enough: that already goes from 16.7 million colors to 64 x 16.7 million, yet apparently we need 262144 times more than that. Maybe up to 64 bits in the future.

