PipeWire Should Be One Of The Exciting Linux Desktop Technologies For 2019
Originally posted by Weasel View Post
Well even if so (though it's probably rare, and they have to be very close in frequency, but not quite, to create "beats"), you can always embed the change into the audible spectrum (you know, downsampling; mathematical formulas, no need for anything fancy), so anything above 20 kHz is useless (except for your dog).
e.g. you mix/record/edit at 24-bit/192 kHz and then downsample at the end so all of it is within the audible spectrum, at 16-bit/44.1 kHz or whatever. This way there's no waste of useless information and everyone can hear the "interference" without needing expensive equipment. I'm sure this is what every studio does, though.
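The "nothing fancy" downsample described above (record high, then band-limit and reduce the rate) can be sketched in a few lines of numpy. This is my own toy illustration, not studio tooling: it assumes a convenient 4:1 integer ratio (176.4 kHz down to 44.1 kHz) so that plain decimation works; a real 192 kHz to 44.1 kHz conversion needs band-limited (e.g. polyphase) resampling.

```python
import numpy as np

fs_hi, fs_lo = 176400, 44100          # 4:1 ratio keeps the decimation step trivial
t = np.arange(fs_hi) / fs_hi          # 1 second of samples

# A "mix" with content inside the audible band and well above it.
mix = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 30000 * t)

# Step 1: brick-wall lowpass below the target Nyquist limit (22.05 kHz).
spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(fs_hi, d=1 / fs_hi)
spectrum[freqs >= fs_lo / 2] = 0
filtered = np.fft.irfft(spectrum, n=fs_hi)

# Step 2: keep every 4th sample. Without step 1, the 30 kHz component
# would fold down into the audible band instead of disappearing.
downsampled = filtered[::4]

spec_lo = np.abs(np.fft.rfft(downsampled))
freqs_lo = np.fft.rfftfreq(fs_lo, d=1 / fs_lo)
print(freqs_lo[np.argmax(spec_lo)])   # -> 1000.0 : only the audible tone survives
```

(The bit-depth reduction to 16-bit, which should include dithering, is a separate step not shown here.)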
I have taken a 192 kHz/24-bit track and downmixed it to 96/24, 44.1/16, and mp3 at its highest quality setting. Everyone I have tested can easily tell mp3 -> 44.1/16, clear as day. The 44.1/16 -> 96/24 difference people could hear, but it was much smaller; a couple couldn't tell a difference. I didn't test anyone who could tell 96/24 -> 192/24, including myself. I should mention that this is when really listening to the music: not on the phone at the same time, not distracted by something else, but sitting in front of the speakers in the sweet spot and really listening to the music. Taking in every subtlety and nuance, building the soundstage in your mind, the placement of each musician. Let the music envelop you and imagine being in the front row at a concert and they are playing only for you, taking in all of the audio beauty.
I will also add a caveat: if you are going to do the test above, it has to be a properly recorded and mixed track. Just because it is in a high-resolution format doesn't mean it contains any more information than a low-quality recording. The above testing is why I believe in the higher-resolution formats.
Originally posted by vsteel View Post
I have taken a 192 kHz/24-bit track and downmixed it to 96/24, 44.1/16, and mp3 at its highest quality setting. Everyone I have tested can easily tell mp3 -> 44.1/16, clear as day. The 44.1/16 -> 96/24 difference people could hear, but it was much smaller; a couple couldn't tell a difference. I didn't test anyone who could tell 96/24 -> 192/24, including myself.
Originally posted by vsteel View Post
They don't need to be close in frequency; think Fourier: many different frequencies can build a different shape, which doesn't mean we can hear them all. It also has an effect on how the speaker generates, or tries to generate (depending on the speaker and recording), the sound waves we can hear. You are getting hung up on thinking people are trying to hear above 20 kHz, and not on how it changes what we can hear. Go on YouTube and find a video where they play sounds from different-shaped waves; it might help.
(I'll admit that it's possible that the supersonic frequencies are inducing more audible harmonics, but let's follow Occam's razor and rule out the simpler, more testable explanations first.)
It's far more likely that you've grown to like the sound of distortion induced into audible frequencies by feeding supersonic content to speakers not designed to deal with it. That means you're not hearing audio that's accurate to the original recording environment, and what you're hearing could just as easily have been reproduced using a DSP filter on the recording end and a 44.1/16 FLAC file.
It's a known problem that feeding out-of-range sounds to speakers can cause them to distort. So set up a test: record the output of the speakers, compare it to the input audio, and look at the waveforms in the audible range when you feed in audio with and without a bandpass filter matching the speakers' rated range.
Originally posted by vsteel View Post
They don't need to be close in frequency; think Fourier: many different frequencies can build a different shape, which doesn't mean we can hear them all. It also has an effect on how the speaker generates, or tries to generate (depending on the speaker and recording), the sound waves we can hear. You are getting hung up on thinking people are trying to hear above 20 kHz, and not on how it changes what we can hear. Go on YouTube and find a video where they play sounds from different-shaped waves; it might help.
And two summed signals have exactly their summed frequencies. When they are close together, they form "beats", so technically you can say you hear a "low frequency volume modulation". Even so, there's actually no low frequency involved at all, but I digress. This is pure math at work.
Here's an exercise for you: take any signal you want and take its DFT (view its spectrum or sonogram; Wavosaur can do this for free, for example). Do the same for another signal.
Take a screenshot. Mix the two signals. Check the mixed spectrum and sonogram and compare against the screenshots.
They will always be the sum of both signals. No other frequencies are "created" here. So a signal above 20 kHz? It's not going to create any frequencies below 20 kHz, no matter how you mix it (linearly) and with what.
To create such frequencies you'd have to do non-linear summing, which is not how things get mixed. Even then, I believe it would only create harmonics above the signal itself and not below.
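The exercise above is easy to reproduce without Wavosaur. In this numpy sketch (my own illustration; the tone frequencies are arbitrary), two tones 4 Hz apart audibly "beat", yet the spectrum of the mix contains exactly the two original frequencies and nothing at the 4 Hz beat rate:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                 # 1 second of samples

a = np.sin(2 * np.pi * 440 * t)        # 440 Hz tone
b = np.sin(2 * np.pi * 444 * t)        # 444 Hz tone: 4 "beats" per second when mixed
mix = a + b                            # linear mixing, as on any DAW sum bus

spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), d=1 / fs)

# Only two significant peaks: the original tones. No energy at 4 Hz.
peaks = freqs[spectrum > len(mix) / 4]
print(peaks)                           # -> [440. 444.]
```

Because the DFT is linear, the spectrum of the sum is the sum of the spectra, which is exactly the point being made.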
Originally posted by vsteel View Post
I have taken a 192 kHz/24-bit track and downmixed it to 96/24, 44.1/16, and mp3 at its highest quality setting. Everyone I have tested can easily tell mp3 -> 44.1/16, clear as day. The 44.1/16 -> 96/24 difference people could hear, but it was much smaller; a couple couldn't tell a difference. I didn't test anyone who could tell 96/24 -> 192/24, including myself. I should mention that this is when really listening to the music: not on the phone at the same time, not distracted by something else, but sitting in front of the speakers in the sweet spot and really listening to the music. Taking in every subtlety and nuance, building the soundstage in your mind, the placement of each musician. Let the music envelop you and imagine being in the front row at a concert and they are playing only for you, taking in all of the audio beauty.
You need to do a blind ABX test and reach something like a 90% or better success rate (you need many trials of the same comparison, randomly alternated, with the listener completely blind; that's why it's called a blind ABX test). A 50% success rate is guesswork at best.
Of course, you need proper recording and proper equipment, but you won't hear the difference.
You can delay the normal wave by a bit (as mp3 does) and then subtract the two waves to see and hear the "difference" between them. Try it and you'll be surprised how inaudible the difference is for a 320 kbps mp3, and mp3 is a crap format (AAC or Opus are much better)!
Last edited by Weasel; 11 February 2019, 02:31 PM.
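To put numbers on why one lucky guess means nothing: under pure guessing, each ABX trial is a coin flip, so the chance of scoring at least k of n trials follows the binomial distribution. A quick stdlib-only sketch (my own, not from the thread):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Chance of getting at least `correct` answers right out of `trials`
    ABX trials by pure guessing (p = 0.5 each)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(8, 10), 3))    # -> 0.055 : suggestive, not conclusive
print(round(abx_p_value(14, 16), 3))   # -> 0.002 : hard to explain as luck
print(round(abx_p_value(5, 10), 3))    # -> 0.623 : 50% scores really are guesswork
```

This is why a handful of casual comparisons proves little: a short run can score well by chance, while a long blind run at high accuracy cannot.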
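The delay-and-subtract "null test" described above looks like this in outline. A toy numpy sketch of my own: the "encoded" signal is a synthetic stand-in (a 3-sample delay plus 0.1% attenuation), since actually decoding an mp3 would require an external library; the point is the align, subtract, and measure procedure:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
original = np.sin(2 * np.pi * 1000 * t)

# Synthetic "lossy encode": same audio, delayed by 3 samples and
# attenuated by 0.1% (real codecs add a fixed encoder delay like this).
delay = 3
encoded = np.zeros_like(original)
encoded[delay:] = 0.999 * original[:-delay]

# Align by the known delay, then subtract to get the difference track.
residual = original[:-delay] - encoded[delay:]

rms = lambda x: np.sqrt(np.mean(x ** 2))
level_db = 20 * np.log10(rms(residual) / rms(original))
print(round(level_db, 1))              # -> -60.0 : the residual is 60 dB down
```

With a real codec the residual would be noise-like rather than a scaled copy, but the same measurement applies: the quieter the aligned difference, the less the encode changed.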
Originally posted by ssokolow View Post
That doesn't sound right to me. The human cochlea is essentially a Fourier transformer, with each region responding to a specific range of frequencies, overlapping with its neighbours and centered on a zone of peak sensitivity. As such, it should also function as a lowpass filter by ignoring any frequencies above the top of its range.
The eye is similar, but has only 3 bandpass filters: for red, green, and blue. (Talking about color only, since colors correspond to frequencies.)
Originally posted by ssokolow View Post
(I'll admit that it's possible that the supersonic frequencies are inducing more audible harmonics, but let's follow Occam's razor and rule out the simpler, more testable explanations first.)
Of course they can introduce audible artifacts below 20 kHz, due to errors like aliasing. That's definitely not the "correct" signal!
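Aliasing is easy to demonstrate. In this numpy toy (my own construction), a 25 kHz tone, too high to hear at its true frequency, is naively decimated to 44.1 kHz with no anti-alias filter and folds down to a very audible 19.1 kHz:

```python
import numpy as np

fs_hi, fs_lo = 176400, 44100           # 176.4 kHz is exactly 4x 44.1 kHz
t = np.arange(fs_hi) / fs_hi           # 1 second of samples

# 25 kHz: above human hearing, and above 44.1 kHz's 22.05 kHz Nyquist limit.
tone = np.sin(2 * np.pi * 25000 * t)

# Keep every 4th sample with NO lowpass first: the classic aliasing mistake.
decimated = tone[::4]

spectrum = np.abs(np.fft.rfft(decimated))
freqs = np.fft.rfftfreq(fs_lo, d=1 / fs_lo)
print(freqs[np.argmax(spectrum)])      # -> 19100.0  (44100 - 25000, folded into the audible band)
```

Any proper resampler filters out such content before decimating, which is why a correct downsample does not create these artifacts.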
Originally posted by ssokolow View Post
It's far more likely that you've grown to like the sound of distortion induced into audible frequencies by feeding supersonic frequencies to speakers not designed to deal with them, which means that you're not hearing audio that's accurate to the original recording environment and what you're hearing could just as easily have been reproduced using a DSP filter on the recording end and a 44.1/16 FLAC file.
It's a known problem that feeding out-of-range sounds to speakers can cause them to distort, so set up a testing scenario where you compare the input audio to the recorded output of the speakers and look at the waveforms for the audible ranges when you feed in audio with and without a bandpass filter matching the speakers' rated range.
Originally posted by Vistaus View Post
'tieing' and 'tying' are both correct, according to the famous Merriam-Webster dictionary (https://www.merriam-webster.com/dictionary/tie "tying\ ˈtī-iŋ \ or tieing"), although 'tying' is more commonly used.
Originally posted by Weasel View Post
Err, two waves do have different shapes, but ears hear in frequency. You don't hear "shape", you hear the frequencies. Ears function like an array of bandpass filters.
And two summed signals have exactly their summed frequencies. When they are close together, they form "beats", so technically you can say you hear a "low frequency volume modulation". Even so, there's actually no low frequency involved at all, but I digress. This is pure math at work.
Originally posted by Weasel View Post
I'm sorry to say but this is complete bullshit or your tests were just plain wrong.
If you are tasting Coke versus Pepsi, do you go and get confidence levels, design of experiments, double-blind tests, and a control group? Bet you didn't; you just taste them and make a decision. Not every decision in life needs a thesis.
I like the 96/24 FLAC format because, to me, it sounds the best. Even if you are not sure you could hear a difference, why have a container format that might be a limiting factor? Space is so cheap these days that file sizes are of almost no consequence.
In the real world, things interact differently than what they teach you in college textbooks. The books make a lot of assumptions (a perfect sphere in a vacuum). I know of many times I have had a college graduate tell me, "I never saw that in the books." Yeah, welcome to the real world, where things are not all nice and tidy, corner cases exist, and there are factors you have never considered which cause interactions and unintended consequences.
I am going to bow out of this conversation now, because I have given all of my information in previous posts and anything else would be a rehash. You can think me a fool, but if you do, make sure you picture me as a happy fool, because I love my sound system and how it is set up. Or you can try to "win" the internet and post more stuff you find on Wikipedia and your sophomore textbooks, telling me I can't hear a difference. Even then, I would still urge you to go out and find good equipment and actually listen for the difference. You may then decide I am correct and begin your own audiophile path, or hear no difference and think me a fool. Remember though: picture me as a happy fool.
Last edited by vsteel; 18 February 2019, 05:47 PM.
Originally posted by vsteel View Post
I wonder if there will be a way to have bit-perfect audio with PipeWire? Currently I can bypass PulseAudio and send music directly to ALSA without any kind of conversion. I didn't see any mention of this on the website when I looked.
When I am playing music right now, I use DeaDBeeF to take my 192 kHz/24-bit FLAC files and send them directly to ALSA. It does lock sound out for the rest of the system, since DeaDBeeF has ALSA locked down, but I am good with that; I don't want other garbage messing up my music. I don't want a bunch of resampling, stray sounds, or shaping of the music. I then take the digital output of my sound card and send it to my external DAC and the rest of my sound system.
Previously, all my 44.1 kHz and 96 kHz FLAC files were resampled to 48 kHz.
In ~/.config/pipewire/pipewire.conf, change:
Code:
#default.clock.allowed-rates = [ 48000 ]
to:
Code:
default.clock.allowed-rates = [ 44100, 48000, 88200, 96000, 192000 ]
Then check what rate the hardware is actually running at:
Code:
cat /proc/asound/card1/stream0 | grep Hz
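As a side note: newer PipeWire releases can also switch the graph rate at runtime through the session metadata, without editing the config file. These commands are from my recollection of the PipeWire documentation, so verify the key name against your installed version:

```shell
# Force the graph to run at 44.1 kHz immediately (no restart needed)
pw-metadata -n settings 0 clock.force-rate 44100

# Clear the override so normal allowed-rates negotiation applies again
pw-metadata -n settings 0 clock.force-rate 0
```

This is handy for testing whether a rate change is audible before committing it to pipewire.conf.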