Ubuntu Desires Lower Audio Latency For Gaming


  • Hamish Wilson
    replied
    Originally posted by RealNC View Post
    Yeah, I'm joking. The point is, the hw mixing argument is just an excuse for shoddy sw mixing implementations. In an age when dual-core CPUs are the low end, not being able to do proper audio mixing sounds more like a bad joke to me.
    Not that I am necessarily writing off hardware mixing like you are, but I do all of my mixing in software on a single-core Sempron and have never had an issue.



  • Hamish Wilson
    replied
    Originally posted by psycho_driver View Post
    I've never had my dmix setup induce audio stuttering into games or other highly CPU-intensive apps like PA does every other time.
    And I have never had any audio stuttering with PulseAudio, and I hardly have the world's most powerful setup.



  • Hamish Wilson
    replied
    Originally posted by curaga View Post
    Yay, flamebaiting! Name one thing he has fixed, instead of made worse.
    Both PulseAudio and systemd were created as approaches to help simplify and unify the Linux desktop and to offer a greater set of desktop features. NetworkManager, although not by Lennart Poettering, was created in the same spirit, and I often hear it much maligned by some, even though for my use it is a killer feature of Linux. But in the end I see no need to argue the quality of his work with you, as you obviously have your own entrenched positions and already actively dislike the work he has done.

    The real point of my post, though, is simply that Ubuntu states these lofty goals but then puts little to no resources behind achieving them, while poor Poettering actually goes out and tries to get things accomplished to these ends and is decried as evil for doing so, often by the same people who hold Ubuntu in such high esteem. It is getting quite irritating.

    Originally posted by Lattyware View Post
    And lose all the benefits of PulseAudio? I don't get why people still hate PA; it's a great bit of software that does really cool stuff, and it's perfect for gamers. Sure, it could apparently do with some latency reduction, but that doesn't mean it's a bad thing to have. PA can assign certain audio streams to certain output devices really easily - for example, sending chat to my headphones and game audio to my speakers - and that is really useful for gamers.
    Indeed, PulseAudio is very useful to have when dealing with peripheral audio devices like headphones or headsets.
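    For what it's worth, the per-stream routing described above can also be done from the command line with PulseAudio's own pactl tool. A rough sketch (the stream index and sink name below are examples; use the values your own system reports):

    ```shell
    # List active playback streams with their index numbers
    pactl list short sink-inputs

    # List available output devices (sinks)
    pactl list short sinks

    # Move stream #7 (e.g. voice chat) to a USB headset sink
    # (the sink name is an example taken from a typical USB headset)
    pactl move-sink-input 7 alsa_output.usb-Headset-00.analog-stereo
    ```

    Tools like pavucontrol expose the same per-stream routing graphically.
    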



  • ninez
    replied
    Originally posted by Ancurio View Post
    Except ALSA is not a sound server, and anyone using PA is using ALSA, because that's what PA is using. ALSA is low level, PA is high level.
    Except that you would be wrong (in part). ALSA has *both* user-space and kernel-side code. PA replaces ALSA's user-space side when in use, i.e. things like dmix, and PA uses its adapters to handle it instead... But when using just ALSA, you will be using its user-space components instead of PA... You obviously don't really understand Linux's sound systems/plumbing if you can make such a silly comment.

    And regardless, my comparison of zita-ajbridge/ALSA (user-space) vs. alsa_in/alsa_out/PulseAudio in terms of stuttering still stands.
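    For anyone unfamiliar with what "ALSA's user-space side" means here: dmix is configured in plain ALSA config. A minimal sketch of an ~/.asoundrc that mixes all clients in software through dmix (the card index and rates are assumptions; adjust for your hardware):

    ```
    # ~/.asoundrc - route the default PCM through the dmix software-mixing plugin
    pcm.!default {
        type plug
        slave.pcm "dmixer"
    }

    pcm.dmixer {
        type dmix
        ipc_key 1024          # arbitrary shared-memory key clients attach to
        slave {
            pcm "hw:0,0"      # first card, first device
            rate 48000
            period_size 1024
        }
    }
    ```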
    Last edited by ninez; 02 November 2012, 04:37 PM.



  • Ancurio
    replied
    Originally posted by ninez View Post
    Yeah, something that I find analogous (on my jackd/ffado system) to the stuttering of PA would be when I need to route an ALSA app (i.e. one that cannot use jackd directly - things like Adobe Flash, VMware, darkplays 11.1.c.a, Skype (? I don't use it), etc.). I have a couple of choices that use snd-aloop (the ALSA loopback / virtual device) that these ALSA apps will use - then I can use either alsa_in/alsa_out (tools that come with jackd that expose the loopback device into JACK as clients) or I can use zita-ajbridge....

    Well, in this scenario alsa_in/out would be PA, while zita-ajbridge would be ALSA.

    zita-ajbridge - solid and fast, with no stuttering.
    alsa_in/alsa_out - 'can' be clunky/choppy in some scenarios (while in others being just 'okay'). It can also be a bit lossy, unless I want to throw a little CPU at the problem.

    So obviously, you can imagine which solution I personally use - zita-ajbridge instead of alsa_in/out, hands down (and thus, in the scenario of ALSA vs. PA, I would be using ALSA). Although there are cases where PA is really needed, as discussed many many times here and elsewhere, I just wish ALSA had been adapted/modernized/improved rather than introducing yet another sound server (but that's just kicking a dead horse and isn't really my problem anyway... and if those were my only two choices, I would probably be using CoreAudio instead).

    cheerz
    Except ALSA is not a sound server, and anyone using PA is using ALSA, because that's what PA is using. ALSA is low level, PA is high level.



  • RealNC
    replied
    Originally posted by energyman View Post
    Except that hw mixing just works and does not introduce any delays. Hmmmm...
    Hardware mixing doesn't work. Why? Because my hardware doesn't do mixing. So how can it work?

    Yeah, I'm joking. The point is, the hw mixing argument is just an excuse for shoddy sw mixing implementations. In an age when dual-core CPUs are the low end, not being able to do proper audio mixing sounds more like a bad joke to me.



  • energyman
    replied
    Originally posted by gQuigs View Post
    That seems like the only benefit for the *average* user, but pretty much no *average* user is ever going to modify per-application volume in this way. A "hack" to allow the volume control applet to directly modify volumes exposed from applications would let us get this benefit, while keeping the stack just ALSA.
    It is already there: hwmixer. It works. Well.

    Just don't use shitastic hardware and then complain.



  • energyman
    replied
    Originally posted by RealNC View Post
    HW mixing is useless though. It was nice on my 486; that Gravis Ultrasound rocked the boat. But today mixing can be done on the CPU so easily, it's not worth having it in HW.
    Except that hw mixing just works and does not introduce any delays. Hmmmm...



  • ninez
    replied
    Originally posted by psycho_driver View Post
    I've never had my dmix setup induce audio stuttering into games or other highly CPU-intensive apps like PA does every other time.
    Yeah, something that I find analogous (on my jackd/ffado system) to the stuttering of PA would be when I need to route an ALSA app (i.e. one that cannot use jackd directly - things like Adobe Flash, VMware, darkplays 11.1.c.a, Skype (? I don't use it), etc.). I have a couple of choices that use snd-aloop (the ALSA loopback / virtual device) that these ALSA apps will use - then I can use either alsa_in/alsa_out (tools that come with jackd that expose the loopback device into JACK as clients) or I can use zita-ajbridge....

    Well, in this scenario alsa_in/out would be PA, while zita-ajbridge would be ALSA.

    zita-ajbridge - solid and fast, with no stuttering.
    alsa_in/alsa_out - 'can' be clunky/choppy in some scenarios (while in others being just 'okay'). It can also be a bit lossy, unless I want to throw a little CPU at the problem.

    So obviously, you can imagine which solution I personally use - zita-ajbridge instead of alsa_in/out, hands down (and thus, in the scenario of ALSA vs. PA, I would be using ALSA). Although there are cases where PA is really needed, as discussed many many times here and elsewhere, I just wish ALSA had been adapted/modernized/improved rather than introducing yet another sound server (but that's just kicking a dead horse and isn't really my problem anyway... and if those were my only two choices, I would probably be using CoreAudio instead).

    cheerz
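    For anyone wanting to try the two bridging paths compared above, a rough setup sketch (device names and rates are assumptions, and note that zita-ajbridge ships its bridges as the zita-a2j/zita-j2a commands):

    ```shell
    # Load the ALSA loopback device that plain ALSA apps will play into
    sudo modprobe snd-aloop

    # Option 1: jackd's own bridge client, exposing the loopback
    # capture side as a JACK client
    alsa_in -d hw:Loopback,1 -r 48000 &

    # Option 2: the zita-ajbridge equivalent of the same bridge
    zita-a2j -d hw:Loopback,1 -r 48000 &
    ```

    Either way, the ALSA app plays into hw:Loopback,0 while the bridge reads the other end and feeds it into the JACK graph.
    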



  • ninez
    replied
    Originally posted by Lynxeye View Post
    You know sound is a considerably slower medium than light. Your brain can't even tell single pictures apart if they are shown within 16ms; most people can't even at 40ms. If you have a really trained ear you'll be able to tell apart sounds with 10ms latency, but I doubt *you* are able to do so. Just remember, 25ms is the latency of the sound from a piano standing 10m away from you. Can you really tell the latency between the pianist triggering the string and you hearing it?

    Bringing down latency to the technical minimum is just a waste of energy for the sake of some people who use the latency numbers as a kind of benchmark.
    First off, you are in no position to claim what I can or can't perceive (nor can you make that claim about anyone else). Secondly, ears detect latencies that your sight can't even come close to detecting, and that is a fact. Third, humans can detect less than 10ms when it comes to audio. Fourth, I can easily tell the difference in latency when I am standing 10m away from my piano vs. sitting in front of it playing (though a more practical example would be using a keyboard + speaker, then moving the speaker 10m away and testing again), just like I can tell the difference between standing right next to my guitar amp vs. being at the other side of the basement (less than 10m). Bringing down the latency to a technical minimum isn't a waste of energy, nor is it some 'benchmark'.

    If you can't tell the difference between being right in front of a sound source vs. being 10m away, then you clearly don't have very sensitive ears (in fact, I would say you have a mild handicap). So rather than going on about this, maybe you should instead take two copies of the same wav file, double-track them, offset the 2nd wave by 25ms, and actually see if you can tell the difference - if you still can't under that circumstance, you have terrible hearing.
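    The numbers being argued over are easy to check. A quick sketch of the arithmetic, assuming roughly 343 m/s for the speed of sound in room-temperature air:

    ```python
    SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

    def acoustic_delay_ms(distance_m: float) -> float:
        """Time for sound to travel distance_m metres, in milliseconds."""
        return distance_m / SPEED_OF_SOUND * 1000.0

    def buffer_latency_ms(frames: int, sample_rate: int) -> float:
        """Latency added by an audio buffer of `frames` samples, in milliseconds."""
        return frames / sample_rate * 1000.0

    print(round(acoustic_delay_ms(10), 1))           # about 29.2 ms for a source 10 m away
    print(round(buffer_latency_ms(1024, 48000), 1))  # about 21.3 ms for a 1024-frame buffer
    ```

    So a 10m distance is closer to 29ms than 25ms, and a single 1024-frame buffer at 48kHz already sits in the same perceptual range the thread is arguing about.
    
    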

