This whole "I can hear the difference between 24-bit and 32-bit" claim was debunked by Monty (of Xiph.org) a long time ago.
32-bit is only useful for studio work, because every audio effect you layer on top of another introduces a little more noise. At 32-bit, that added noise is so incredibly low that there is virtually no audible noise buildup even after layering hundreds of effects.
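A rough back-of-the-envelope sketch (in Python, my choice; not from the original post) of why extra bits matter so little at playback: each bit of depth buys roughly 6 dB of theoretical SNR for an ideal quantizer, so 16-bit already sits near 98 dB, beyond what most listening environments allow.

```python
def snr_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer driven by a
    full-scale sine wave: 6.02*N + 1.76 dB (standard textbook formula)."""
    return 6.02 * bits + 1.76

# Each extra bit buys ~6 dB of noise floor.
for bits in (16, 24, 32):
    print(f"{bits}-bit: ~{snr_db(bits):.1f} dB SNR")  # ~98.1, ~146.2, ~194.4
```

(32-bit float is a slightly different animal: its 24-bit mantissa means the noise floor scales with the signal, which is exactly what makes it forgiving when stacking effects in a DAW.)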
192kHz is also only really useful in studios, and I believe the only ones who can actually take advantage of that sample rate are orchestra recording sessions. The most anyone needs for playback is 48kHz, because by the Nyquist theorem it captures everything up to 24kHz, comfortably above the ~20kHz upper limit of human hearing.
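The Nyquist point can be sketched numerically (Python again, my illustration): a tone below half the sample rate is captured as-is, while a tone above it folds down to a false lower frequency.

```python
def alias_freq(f_signal: float, f_sample: float) -> float:
    """Frequency at which a pure tone appears after sampling,
    via spectral folding around multiples of the sample rate."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

print(alias_freq(20_000, 48_000))  # 20 kHz < 24 kHz Nyquist: stays at 20000.0... 
print(alias_freq(20_000, 32_000))  # above 16 kHz Nyquist: folds down to 12000.0
```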
THD and SNR are probably the most noticeable specs of a modern DAC, and even then we don't need that much; the brain fills in missing information all the time.
The idiot talking about human evolution is living in a fantasy world. When the most you could get out of a recording was 6 bits, of course people were fine with it; they simply had not heard anything at higher resolution. Now we're at the point where you can get remarkably accurate reproduction despite massive compression ratios. Humans are good at recognizing patterns, so back when encoder artifacts were audible, you could tell a compressed song from an uncompressed one. The same is true of new games versus old games: once you've seen the higher clarity, it's hard to go back. But those terrible encoders were more than a decade ago, and today the weakest links are the physical components that actually reproduce the sound.
I have a song that I always thought was poorly mastered because it distorted in the louder passages when a lot of different sounds were going on. It was the same problem in my car: it sounded distorted regardless of the audio level coming out of my phone. One day I got a pair of Fostex T50RPs and noticed a lot of details in my songs that otherwise weren't there. Eventually I came back to that (previously) distorted song, and this time it sounded clear and powerful. Physically, traditional (and cheap) speaker-cone materials can't hold their shape and rigidity well enough to reproduce sounds that change faster than the material can move.
That being said, my biggest gripe with these headphones is the lack of bass, so I got 5.0 speakers for about the same price as those headphones and used a $40 amp to power the speakers until I could get a proper home theater audio receiver.
1) I should have you know that I do, in fact, have 24-bit/96kHz WAV music, and I cannot tell the difference between it and the same files re-encoded to 16-bit or 24-bit/48kHz FLAC.
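For what it's worth, a "can't tell the difference" claim like mine is exactly what a blind ABX test measures. A minimal scoring sketch (Python, my illustration): you count how often pure guessing would do at least as well.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: probability of getting >= `correct` answers
    right out of `trials` ABX trials by guessing (p = 0.5 each)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# 12 of 16 correct happens by chance under 4% of the time:
print(f"{abx_p_value(12, 16):.3f}")  # 0.038
```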
2) A stereo recording is all you need as long as it has been properly mixed. The problem with listening to 7.1 audio on headphones that can only do 2.0 stereo is that you need a really good matrix or surround-sound downmixer in software. Even a home theater with 7.1 surround sound isn't perfect, because there are no sounds above or below you, and if you move your head around, the image often falls out of alignment with the speakers.
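The "matrix" part of that downmix is just a weighted fold of the surround channels into left and right. A sketch (Python, my illustration; the -3 dB coefficients are one common convention, not a standard the post cites, and the LFE is simply dropped here, as many downmixers do):

```python
def downmix_71_to_stereo(frame):
    """Fold one 7.1 sample frame (L, R, C, LFE, Ls, Rs, Lb, Rb)
    down to a stereo pair. Center and surrounds are mixed in at
    -3 dB (0.7071); the LFE channel is discarded."""
    L, R, C, LFE, Ls, Rs, Lb, Rb = frame
    g = 0.7071  # -3 dB gain
    left = L + g * (C + Ls + Lb)
    right = R + g * (C + Rs + Rb)
    return left, right

# A front-left-only frame passes straight through to the left channel:
print(downmix_71_to_stereo((1, 0, 0, 0, 0, 0, 0, 0)))  # (1, 0)
```

A real downmixer would also normalize to avoid clipping when many channels are hot at once; this sketch only shows the channel matrix itself.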