PipeWire Should Be One Of The Exciting Linux Desktop Technologies For 2019


  • therealbene
    replied
    Originally posted by vsteel View Post
    I wonder if there will be a way to have bit-perfect audio with PipeWire? Currently I can bypass PulseAudio and send music directly to ALSA without any kind of conversion. I didn't see any mention of this on the website when I looked.

    When I am playing music right now, I use Deadbeef to take my 192 kHz/24-bit FLAC files and send them directly to ALSA. It does lock sound away from the rest of the system, since Deadbeef has ALSA locked down, but I am good with that; I don't want other garbage messing up my music. I don't want a bunch of resampling, stray sounds, or shaping of the music. I then take the digital output of my sound card and send it to my external DAC and the rest of my sound system.
    I know this is an old post, but I managed to get playback without up/downsampling on my DAC by altering the PipeWire settings (I'm using Garuda Linux, an Arch derivative).

    Previously, all my 44.1 kHz and 96 kHz FLAC files were resampled to 48 kHz.

    In
    Code:
    ~/.config/pipewire/pipewire.conf
    (if it doesn't exist, create it by copying /usr/share/pipewire/pipewire.conf), I altered the default

    Code:
    #default.clock.allowed-rates = [ 48000 ]
    to

    Code:
    default.clock.allowed-rates = [ 44100, 48000, 88200, 96000, 192000 ]
    Now I can confirm by running
    Code:
    cat /proc/asound/card1/stream0 | grep Hz
    that the output sample rate follows the audio file being played.
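
    For reference, here is a minimal sketch of that fragment in context (an excerpt under my assumptions, not the whole file; the restart command assumes a systemd user session, and the card number under /proc/asound varies per system):

    Code:
    # ~/.config/pipewire/pipewire.conf (excerpt)
    context.properties = {
        # Follow whichever of these rates the stream uses instead of
        # resampling everything to a single fixed rate.
        default.clock.allowed-rates = [ 44100, 48000, 88200, 96000, 192000 ]
    }
    After restarting PipeWire (e.g. systemctl --user restart pipewire), pw-metadata -n settings should list the new clock.allowed-rates.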



  • vsteel
    replied
    Originally posted by Weasel View Post
    Err, two waves do indeed have different shapes, but ears hear in frequency. You don't hear "shape"; you hear the frequencies. Ears function like an array of bandpass filters.

    And two summed signals have exactly their summed frequencies. When the frequencies are close together, they form "beats", so technically you can say you hear a low-frequency volume modulation. Even so, there's actually no low-frequency component involved at all, but I digress. This is pure math at work.
    I know no new frequencies are created; I have stated that numerous times in the past. This is how waves interact in an analog environment, and they all meet at the imperfect speaker trying to reproduce all of those frequencies at once.

    Originally posted by Weasel View Post
    I'm sorry to say, but either this is complete bullshit or your tests were just plain wrong.
    No, my tests are not wrong; they are just not scientific, which I never claimed they were.

    If you are tasting Coke versus Pepsi, do you go and get confidence levels, a design of experiments, double-blind tests, and a control group? I bet you don't; you just taste them and make a decision. Not every decision in life needs a thesis.

    I like the 96/24 FLAC format because, to me, it sounds the best. Even if you are not sure you could hear a difference, why use a format that might be a limiting factor? Storage is so cheap these days that the file sizes are of almost no consequence.

    In the real world, things interact differently than what they teach you in college textbooks. The books make a lot of assumptions (a perfect sphere in a vacuum), and I know of many times a college graduate has told me, "I never saw that in the books." Yeah, welcome to the real world, where things are not all nice and tidy, corner cases exist, and there are factors you have never considered that cause interactions and unintended consequences.

    I am going to bow out of this conversation now; I have given all of the information in previous posts, so anything else would be a rehash. You can think me a fool, but if you do, make sure you picture me as a happy fool, because I love my sound system and how it is set up. Or you can try to "win" the internet and post more material from Wikipedia and your sophomore textbooks telling me I can't hear a difference. Either way, I would still urge you to go out and find good equipment to actually listen for the difference. Then you may decide I am correct and enjoy your own audiophile path, or hear no difference and think me a fool. Remember, though: picture me as a happy fool.



    Last edited by vsteel; 18 February 2019, 05:47 PM.



  • JAYL
    replied
    Originally posted by Vistaus View Post

    'tieing' and 'tying' are both correct, according to the famous Merriam-Webster dictionary (https://www.merriam-webster.com/dictionary/tie "tying\ ˈtī-​iŋ \ or tieing"), although 'tying' is more commonly used.
    Huh, I never knew that. "Tieing" looks wrong to me.



  • Weasel
    replied
    Originally posted by ssokolow View Post
    That doesn't sound right to me. The human cochlea is essentially a Fourier transformer, with each region responding to a specific range of frequencies, overlapping with its neighbours and centered on a zone of peak sensitivity. As such, it should also function as a lowpass filter by ignoring any frequencies above the top of its range.
    It's similar to how the eye works. The ear has a lot of "bandpass filters" that each get excited by only a narrow range of frequencies, and those send their signals to the brain. It's like an analog DFT. (This is also how analog vocoders work without a Fourier transform.)

    The eye is similar but has only three bandpass filters, for red, green, and blue. (Talking about colors only, since those are the frequency-dependent part.)
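
    To make the "array of bandpass filters" idea concrete, here is a minimal sketch (assuming Python with NumPy/SciPy; the band edges are arbitrary illustrative values, not a model of the real cochlea) that reports how much energy a signal carries in each band:

    Code:
    # A crude "analog DFT": a bank of bandpass filters, each reporting the
    # energy a signal carries in its band, loosely like cochlear regions.
    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 44100                    # sample rate in Hz
    t = np.arange(fs) / fs        # one second
    x = np.sin(2*np.pi*440*t) + 0.5*np.sin(2*np.pi*3000*t)  # 440 Hz + 3 kHz
    bands = [(100, 800), (800, 2000), (2000, 6000), (6000, 16000)]
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        y = sosfilt(sos, x)
        print(f"{lo}-{hi} Hz band RMS: {np.sqrt(np.mean(y**2)):.3f}")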

    Originally posted by ssokolow View Post
    (I'll admit that it's possible that the supersonic frequencies are inducing more audible harmonics, but let's follow Occam's razor and rule out the simpler, more testable explanations first.)
    That's not possible, because harmonics are always above the fundamental; integer multiples of it, in fact.

    Of course, errors like aliasing can introduce audible components below that, but that's definitely not the "correct" signal!
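
    As a minimal illustration of the aliasing case (a sketch, assuming Python with NumPy; the frequencies are arbitrary): a 25 kHz tone sampled at 44.1 kHz with no anti-aliasing filter folds down to an audible 19.1 kHz component.

    Code:
    # Aliasing: naively sampling a 25 kHz tone at 44.1 kHz folds it to
    # |44100 - 25000| = 19100 Hz, which lands inside the audible range.
    import numpy as np

    fs = 44100
    n = np.arange(fs)                      # one second of samples
    x = np.sin(2*np.pi*25000*n/fs)         # 25 kHz tone, sampled too slowly
    peak_bin = np.argmax(np.abs(np.fft.rfft(x)))
    print(peak_bin * fs / len(x))          # 19100.0, not 25000.0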

    Originally posted by ssokolow View Post
    It's far more likely that you've grown to like the sound of distortion induced into audible frequencies by feeding supersonic frequencies to speakers not designed to deal with them. That means you're not hearing audio that's accurate to the original recording environment, and what you're hearing could just as easily have been reproduced using a DSP filter on the recording end and a 44.1/16 FLAC file.

    It's a known problem that feeding out-of-range sounds to speakers can cause them to distort, so set up a testing scenario where you compare the input audio to the recorded output of the speakers and look at the waveforms for the audible ranges when you feed in audio with and without a bandpass filter matching the speakers' rated range.
    Fully agreed.



  • Weasel
    replied
    Originally posted by vsteel View Post
    They don't need to be close in frequency; think Fourier: many different frequencies can combine into a different shape, which doesn't mean we can hear them all. It also has an effect on how the speaker generates, or tries to generate (depending on the speaker and recording), the sound waves we can hear. You are getting hung up on thinking people are trying to hear above 20 kHz, rather than on how it changes what we can hear. Go on YouTube and find a video where they play sounds from differently shaped waves; it might help.
    Err, two waves do indeed have different shapes, but ears hear in frequency. You don't hear "shape"; you hear the frequencies. Ears function like an array of bandpass filters.

    And two summed signals have exactly their summed frequencies. When the frequencies are close together, they form "beats", so technically you can say you hear a low-frequency volume modulation. Even so, there's actually no low-frequency component involved at all, but I digress. This is pure math at work.

    Here's an exercise for you: take any signal you want. Take its DFT (look at its spectrum or sonogram; Wavosaur, for example, can do this for free). Do the same for another signal.

    Take a screenshot. Mix them. Check the mixed spectrum and sonogram and compare screenshots.

    The mixed spectrum will always be the sum of both signals' spectra. No other frequencies are "created" here. So a signal above 20 kHz? It's not going to create any frequencies below 20 kHz, no matter how you mix it (linearly) and with what.

    To create such frequencies you'd have to do non-linear summing, which is not how things get mixed. Even then, I believe it would only create harmonics above the signal itself, not below.
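
    Here is the same exercise as a minimal script instead of screenshots (a sketch, assuming Python with NumPy; the tone frequencies are arbitrary): because the DFT is linear, the spectrum of the mix is, bin for bin, the sum of the two spectra.

    Code:
    # Linearity of the DFT: spectrum(a + b) == spectrum(a) + spectrum(b),
    # so linear mixing never "creates" frequencies absent from both inputs.
    import numpy as np

    fs = 48000
    t = np.arange(fs) / fs
    a = np.sin(2*np.pi*1000*t)             # 1 kHz tone
    b = 0.3*np.sin(2*np.pi*21000*t)        # 21 kHz tone, above hearing
    mixed = np.fft.rfft(a + b)             # spectrum of the mix
    summed = np.fft.rfft(a) + np.fft.rfft(b)
    print(np.allclose(mixed, summed))      # True: no new frequencies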

    Originally posted by vsteel View Post
    I have taken a 192 kHz/24-bit recording and downsampled it to 96/24, 44/16, and mp3 at its highest quality setting. Everyone I have tested can tell mp3 from 44/16 as clear as day. People could hear the 44/16 vs. 96/24 difference, but it was much smaller, and a couple couldn't tell at all. I didn't test anyone who could tell 96/24 from 192/24, including myself. I should mention that this is when really listening to the music: not on the phone at the same time, not distracted by something else, but sitting in front of the speakers in the sweet spot and really listening. Taking in every subtlety and nuance, building the sound stage in your mind, the placement of each musician. Let the music envelop you and imagine being in the front row at a concert where they are playing for only you, taking in all of the audio beauty.
    I'm sorry to say, but either this is complete bullshit or your tests were just plain wrong.

    You need to do a blind ABX test and come up with a success rate of 90% or more (you need to do many trials of the same comparison, randomly alternated and completely blind to the listener; that's why it's called a blind ABX test). A 50% success rate is just chance.
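
    For a sense of the numbers (a minimal sketch in Python; the 10-trial count is an arbitrary example), here is the probability of reaching a given score by pure guessing:

    Code:
    # Probability of k or more correct out of n trials by coin-flip guessing.
    from math import comb

    def p_by_chance(k, n):
        return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

    print(p_by_chance(9, 10))   # ~0.011: 9+/10 by luck is about 1%
    print(p_by_chance(5, 10))   # ~0.623: 5/10 proves nothing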

    Of course, you need proper recording and proper equipment, but you won't hear the difference.

    You can delay the original wave by a bit (to line it up with the delay mp3 encoding introduces) and then subtract the two waves to see and hear the "difference" between them. Try it and you'll be surprised how inaudible the difference is for a 320 kbps mp3, and mp3 is a crap format (AAC and Opus are much better)!
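
    A minimal sketch of that difference test (assuming Python with NumPy; 16-bit quantization stands in for the lossy step, since decoding a real mp3 and finding its encoder delay would need an external decoder):

    Code:
    # Null test: subtract an aligned lossy version from the original and
    # measure the residual. For a real mp3, decode it and align for the
    # encoder delay first; quantization here needs no alignment.
    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs
    original = 0.5*np.sin(2*np.pi*440*t)
    lossy = np.round(original * 32767) / 32767    # 16-bit word length
    residual = original - lossy
    rms = np.sqrt(np.mean(residual**2))
    print(f"{20*np.log10(rms / 0.5):.0f} dB below peak")  # about -95 dB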
    Last edited by Weasel; 11 February 2019, 02:31 PM.



  • ssokolow
    replied
    Originally posted by vsteel View Post
    I have taken a 192 kHz/24-bit recording and downsampled it to 96/24, 44/16, and mp3 at its highest quality setting. Everyone I have tested can tell mp3 from 44/16 as clear as day. People could hear the 44/16 vs. 96/24 difference, but it was much smaller, and a couple couldn't tell at all. I didn't test anyone who could tell 96/24 from 192/24, including myself.
    I wouldn't draw any conclusions until you're comparing lossless formats. MP3 is specifically designed to throw out components its designers didn't expect you to be able to distinguish, and they could have been wrong, or an encoder could have been designed to be "good enough" by someone without your ears.

    Originally posted by vsteel View Post

    They don't need to be close in frequency; think Fourier: many different frequencies can combine into a different shape, which doesn't mean we can hear them all. It also has an effect on how the speaker generates, or tries to generate (depending on the speaker and recording), the sound waves we can hear. You are getting hung up on thinking people are trying to hear above 20 kHz, rather than on how it changes what we can hear. Go on YouTube and find a video where they play sounds from differently shaped waves; it might help.
    That doesn't sound right to me. The human cochlea is essentially a Fourier transformer, with each region responding to a specific range of frequencies, overlapping with its neighbours and centered on a zone of peak sensitivity. As such, it should also function as a lowpass filter by ignoring any frequencies above the top of its range.

    (I'll admit that it's possible that the supersonic frequencies are inducing more audible harmonics, but let's follow Occam's razor and rule out the simpler, more testable explanations first.)

    It's far more likely that you've grown to like the sound of distortion induced into audible frequencies by feeding supersonic frequencies to speakers not designed to deal with them. That means you're not hearing audio that's accurate to the original recording environment, and what you're hearing could just as easily have been reproduced using a DSP filter on the recording end and a 44.1/16 FLAC file.

    It's a known problem that feeding out-of-range sounds to speakers can cause them to distort, so set up a testing scenario where you compare the input audio to the recorded output of the speakers and look at the waveforms for the audible ranges when you feed in audio with and without a bandpass filter matching the speakers' rated range.
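
    The band-limiting half of that test could look like this (a minimal sketch, assuming Python with NumPy/SciPy; the 40 Hz-20 kHz edges are a hypothetical rated range, so substitute your speakers' spec):

    Code:
    # Band-limit test audio to a speaker's rated range before playback, so
    # any remaining audible difference can't come from out-of-range content.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def band_limit(audio, fs, lo=40.0, hi=20000.0):
        # Zero-phase filtering keeps filtered and raw versions time-aligned.
        sos = butter(8, [lo, hi], btype='bandpass', fs=fs, output='sos')
        return sosfiltfilt(sos, audio)

    fs = 96000
    t = np.arange(fs) / fs
    test = np.sin(2*np.pi*10000*t) + 0.3*np.sin(2*np.pi*25000*t)
    filtered = band_limit(test, fs)   # play/record both, compare waveforms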



  • JAYL
    replied
    Originally posted by tildearrow View Post

    Not a typo.
    Am I crazy? Shouldn't it have been "tying"?



  • vsteel
    replied
    Originally posted by Weasel View Post
    Well, even if so (though it's probably rare, and they have to be very close in frequency, but not quite, to create "beats"), you can always embed the change into the audible spectrum (you know, standard downsampling math, no need for anything fancy), so anything above 20 kHz is useless (except for your dog).

    E.g. you mix/record/edit at 24-bit/192 kHz and then downsample at the end to 16-bit/44 kHz or whatever, so all of it is within the audible spectrum. This way there's no waste on useless information, and everyone can hear the "interference" without needing expensive equipment. I'm sure this is what every studio does, though.
    They don't need to be close in frequency; think Fourier: many different frequencies can combine into a different shape, which doesn't mean we can hear them all. It also has an effect on how the speaker generates, or tries to generate (depending on the speaker and recording), the sound waves we can hear. You are getting hung up on thinking people are trying to hear above 20 kHz, rather than on how it changes what we can hear. Go on YouTube and find a video where they play sounds from differently shaped waves; it might help.

    I have taken a 192 kHz/24-bit recording and downsampled it to 96/24, 44/16, and mp3 at its highest quality setting. Everyone I have tested can tell mp3 from 44/16 as clear as day. People could hear the 44/16 vs. 96/24 difference, but it was much smaller, and a couple couldn't tell at all. I didn't test anyone who could tell 96/24 from 192/24, including myself. I should mention that this is when really listening to the music: not on the phone at the same time, not distracted by something else, but sitting in front of the speakers in the sweet spot and really listening. Taking in every subtlety and nuance, building the sound stage in your mind, the placement of each musician. Let the music envelop you and imagine being in the front row at a concert where they are playing for only you, taking in all of the audio beauty.

    I will also put in a caveat: if you are going to do the test above, the tracks have to be properly recorded and mixed. Just because something is in a high-resolution format doesn't mean it contains any more information than a low-quality recording. The testing above is why I believe in the higher-resolution format.



  • profoundWHALE
    replied
    Originally posted by Weasel View Post
    Interesting. I admit I know more about theoretical DSP stuff, so more about digital waves than about what kind of energy you need to reproduce analog waves (that's what I know and have coded). I do know, though, that for light, higher-frequency waves have more energy, so based on what you said, this seems to be the inverse.
    I should clarify: when I talk about energy, I'm referring primarily to kinetic or mechanical energy, as in the ability of the waves to shake your eardrums.



  • Weasel
    replied
    Originally posted by profoundWHALE View Post
    You've mostly got it.

    It is hard to make a low-frequency sound that is loud; it's not uncommon for subwoofers to operate in the thousands of watts.

    As you move into the 500 Hz range it becomes MUCH easier, requiring less energy to move the air, but your ears are also increasingly sensitive (tuned) to those frequencies. This continues up to ~5 kHz. The sensitivity of your ears stays at that gain until ~8 kHz, but because there is less energy in the waves, it is less painful. The sensitivity of your ears and the energy within the wave decrease at the same time, resulting in less damage.

    Is it possible for a 25 kHz wave to damage your ears at 150 dB? You'd have to be near it. Remember that the intensity of the sound is inversely proportional to the square of the distance from you to the source.
    Interesting. I admit I know more about theoretical DSP stuff, so more about digital waves than about what kind of energy you need to reproduce analog waves (that's what I know and have coded). I do know, though, that for light, higher-frequency waves have more energy, so based on what you said, this seems to be the inverse.
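
    As a quick worked example of the inverse-square falloff mentioned in the quote above (a sketch, assuming Python with NumPy, free-field spreading with no reflections, and a hypothetical 150 dB SPL at 1 m):

    Code:
    # Intensity falls as 1/r^2, so SPL drops about 6 dB per doubling of
    # distance (free field, no reflections assumed).
    import numpy as np

    spl_at_1m = 150.0                        # hypothetical level at 1 m
    for r in [1, 2, 4, 8, 16]:
        spl = spl_at_1m - 20*np.log10(r)     # pressure level: 20*log10
        print(f"{r} m: {spl:.1f} dB SPL")    # ~6 dB less per doubling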

