Khronos Expands Focus On Safety Critical APIs


  • #31
    Originally posted by starshipeleven View Post
    Ridiculously high, like 2% when playing music and 0.1% when not (just looked at it with htop)?

    No, it is not.
    Do I really need to pull out some mathematics on you? I've seen Pulse consume anywhere from 2% to 10% all by itself, not counting the player. Even if you don't know how to calculate latencies, it should be pretty obvious that 2% of a multi-GHz CPU is stupidly high.

    A 66 MHz 486DX can play music at 2% CPU load on Windows 3.1.
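
    To put rough numbers on that comparison, here is a back-of-the-envelope sketch (the 3 GHz figure is an illustrative assumption, not a measurement): 2% of a modern core is roughly the entire cycle budget of that 486.

    Code:
    /* Back-of-the-envelope cycle budgets -- illustrative clock rates only. */
    #include <stdio.h>

    int main(void)
    {
        double modern_hz = 3.0e9;              /* assumed 3 GHz core         */
        double i486_hz   = 66.0e6;             /* 66 MHz 486DX               */

        double pa_cycles  = 0.02 * modern_hz;  /* 2% load on the modern core */
        double i486_total = 1.00 * i486_hz;    /* the 486's entire budget    */

        printf("PA at 2%% of 3 GHz: %.0f cycles/s\n", pa_cycles);  /* 60000000 */
        printf("486DX in total:    %.0f cycles/s\n", i486_total);  /* 66000000 */
        return 0;
    }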
    Last edited by duby229; 29 July 2016, 09:20 AM.

    Comment


    • #32
      Originally posted by duby229 View Post
      Do I really need to pull out some mathematics on you? I've seen Pulse consume anywhere from 2% to 10% all by itself, not counting the player. Even if you don't know how to calculate latencies, it should be pretty obvious that 2% of a multi-GHz CPU is stupidly high.
      Since when is CPU usage an indication of latency again?

      Comment


      • #33
        Originally posted by starshipeleven View Post
        Since when is CPU usage an indication of latency again?
        Really? A clock cycle is a measure of time. In the case above you have to consider it in terms of duty cycle. A higher-frequency processor has a smaller measure of time per cycle. If it used the same number of clocks, it would have lower latency than a lower-clocked processor. But if it uses more clocks to accomplish the same thing, then that can be considered in terms of duty cycle, which is a measure of latency.
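
        As a sketch of that cycles-to-time conversion (both clock rates are assumed for illustration): the same number of cycles costs very different wall-clock time on different clocks.

        Code:
        /* period = 1/f, so a fixed amount of work (in cycles) takes
         * different wall-clock time at different clock rates. */
        #include <stdio.h>

        int main(void)
        {
            double f_fast = 3.0e9, f_slow = 66.0e6;  /* assumed 3 GHz vs 66 MHz */
            double cycles = 1.0e6;                   /* fixed amount of work    */

            printf("time per cycle: %.3f ns vs %.1f ns\n",
                   1e9 / f_fast, 1e9 / f_slow);      /* 0.333 ns vs 15.2 ns */
            printf("1M cycles take: %.3f ms vs %.1f ms\n",
                   1e3 * cycles / f_fast, 1e3 * cycles / f_slow);
            return 0;
        }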

        You troll on a computer-related forum every day, but you don't know the relationship between bandwidth and latency?

        EDIT: Really, if you consider it, the reason why PA sucks ass so hard is exactly the same reason why DDR4 sucks ass so hard.
        Last edited by duby229; 29 July 2016, 10:42 AM.

        Comment


        • #34
          Originally posted by duby229 View Post

          It's mostly PA's fault, but ALSA is to blame as well; they try to do way too much in the kernel, and a lot of those functions should be in userspace. PA should never have needed to be conceived.
          Actually, too little is done in the kernel, which is why PA is needed. If the kernel did proper sound hardware abstraction, PA would not be needed. Also, the ALSA API is awful and awfully documented (the only existing documentation is old and was wrong even when new), which is part of the reason why no driver implements it right and no application uses it right.
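
          For reference, this is roughly what the ALSA userspace API in question looks like from an application's side; a minimal playback sketch, assuming alsa-lib is installed (build with gcc -lasound -lm):

          Code:
          /* Minimal ALSA playback sketch: one second of a 440 Hz tone.
           * "default" goes through whatever the system routes it to
           * (dmix, PulseAudio's ALSA plugin, or raw hardware). */
          #include <alsa/asoundlib.h>
          #include <math.h>

          int main(void)
          {
              snd_pcm_t *pcm;
              static short buf[48000];         /* 1 s of mono 16-bit samples */

              if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
                  return 1;

              /* One helper call hiding the hw_params/sw_params dance;
               * 500000 us = 0.5 s of requested buffering. */
              if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                     SND_PCM_ACCESS_RW_INTERLEAVED,
                                     1, 48000, 1, 500000) < 0)
                  return 1;

              for (int i = 0; i < 48000; i++)  /* 440 Hz sine */
                  buf[i] = (short)(32000 * sin(2 * M_PI * 440 * i / 48000.0));

              snd_pcm_writei(pcm, buf, 48000); /* blocking interleaved write */
              snd_pcm_drain(pcm);
              snd_pcm_close(pcm);
              return 0;
          }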

          Comment


          • #35
            Originally posted by carewolf View Post
            Actually, too little is done in the kernel, which is why PA is needed. If the kernel did proper sound hardware abstraction, PA would not be needed. Also, the ALSA API is awful and awfully documented (the only existing documentation is old and was wrong even when new), which is part of the reason why no driver implements it right and no application uses it right.
            I definitely don't agree there should be more in the kernel, but I do agree it's awful. When ALSA was new there were a lot of good things said about it, so I liked it. But now that time has passed and we've experienced its flaws, I think it needs to be replaced with something designed from the start with a hardware interface in the kernel and the lightest userspace abstraction possible. ALSA doesn't fit that bill, and neither does PA.

            EDIT: Literally, I think the only thing the userspace sound service should do is configure the sound interface and pass through audio. Period. Nothing else at all. And the kernel sound interface should be designed from the beginning so that audio drivers can be configured from that userspace service. There should be no alsamixer, no asoundrc. Literally all of it should be configurable only through the sound service.
            Last edited by duby229; 29 July 2016, 12:20 PM.

            Comment


            • #36
              Originally posted by duby229 View Post
              Really?
              Yes, because there are a few more factors that influence application latency, and they are mind-bogglingly bigger than that.
              Like drive latency (JACK uses tmpfs for a reason), or CPU scheduler priority (JACK can be launched with realtime priority if the kernel supports that, and you really want your kernel to support that).

              JACK can run at 3-8% CPU with less than 10 ms latency on the same PCs that show 150 ms with PulseAudio (and with less CPU load, btw).

              But JACK requires setup and isn't as flexible as PA, which is why it's mostly used in audio workstations.
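
              For context, a JACK client is just a callback that the server invokes once per period in a realtime thread, and the period is what sets the latency floor: nframes / sample_rate, e.g. 128 frames at 48000 Hz is about 2.7 ms. A minimal sketch, assuming libjack (build with gcc -ljack):

              Code:
              /* Minimal JACK client: the process() callback runs once per
               * period in the server's realtime thread. */
              #include <jack/jack.h>
              #include <unistd.h>

              static jack_port_t *out_port;

              static int process(jack_nframes_t nframes, void *arg)
              {
                  jack_default_audio_sample_t *out =
                      jack_port_get_buffer(out_port, nframes);
                  for (jack_nframes_t i = 0; i < nframes; i++)
                      out[i] = 0.0f;    /* silence; a synth would write here */
                  return 0;
              }

              int main(void)
              {
                  jack_client_t *client = jack_client_open("sketch",
                                                           JackNullOption, NULL);
                  if (!client)
                      return 1;

                  out_port = jack_port_register(client, "out",
                                                JACK_DEFAULT_AUDIO_TYPE,
                                                JackPortIsOutput, 0);
                  jack_set_process_callback(client, process, NULL);
                  jack_activate(client);   /* callback now fires every period */

                  sleep(10);               /* let it run for a while */
                  jack_client_close(client);
                  return 0;
              }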

              The reason why PA sucks ass so hard is exactly the same reason why DDR4 sucks ass so hard.
              I've yet to encounter a DDR4 bank that "sucks ass so hard" just because it has slightly higher latency at lower frequencies. All applications are mostly bandwidth-dependent, and even then it's HARD to notice
              (only iGPUs notice, as they do need tons of bandwidth).
              Last edited by starshipeleven; 29 July 2016, 12:23 PM.

              Comment


              • #37
                Originally posted by duby229 View Post
                EDIT: Literally, I think the only thing the userspace sound service should do is configure switches and pass through audio. Period. Nothing else at all.
                Yeah, if modern audio hardware had hardware mixing capability, it could be doable without restricting applications to using audio ONE AT A TIME.

                Since that isn't possible, you're forced to mix somehow and dump a SINGLE stream from MULTIPLE sources onto the dumb, shitty audio hardware.
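
                For the record, the software mixing being argued about reduces to something like this per-sample summation (mix_s16 is a hypothetical helper name for illustration, not anything out of PA):

                Code:
                /* Sketch of software mixing: sum two 16-bit PCM streams into
                 * one, saturating at the 16-bit limits instead of wrapping. */
                #include <stdint.h>
                #include <stddef.h>

                void mix_s16(const int16_t *a, const int16_t *b,
                             int16_t *out, size_t n)
                {
                    for (size_t i = 0; i < n; i++) {
                        int32_t s = (int32_t)a[i] + (int32_t)b[i]; /* widen */
                        if (s > INT16_MAX) s = INT16_MAX;          /* clip  */
                        if (s < INT16_MIN) s = INT16_MIN;
                        out[i] = (int16_t)s;
                    }
                }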

                Comment


                • #38
                  Originally posted by duby229 View Post
                  EDIT: Literally, I think the only thing the userspace sound service should do is configure the sound interface and pass through audio. Period. Nothing else at all. And the kernel sound interface should be designed from the beginning so that audio drivers can be configured from that userspace service. There should be no alsamixer, no asoundrc. Literally all of it should be configurable only through the sound service.
                  Most modern audio hardware does not support hardware mixing, so either you force applications to use audio ONE AT A TIME or you do some kind of mixing.
                  Also you need to run some kind of chaperone software for the dynamic addition/removal of audio hardware that happens commonly (like with USB headsets, or with Bluetooth audio).

                  Sure, PA can be better, but your dream of "no, let's make it passthrough-only" is not viable.
                  Last edited by starshipeleven; 29 July 2016, 12:31 PM.

                  Comment


                  • #39
                    Originally posted by starshipeleven View Post
                    Yes, because there are a few more factors that influence application latency, and they are mind-bogglingly bigger than that.
                    Like drive latency (JACK uses tmpfs for a reason), or CPU scheduler priority (JACK can be launched with realtime priority if the kernel supports that, and you really want your kernel to support that).

                    JACK can run at 3-8% CPU with less than 10 ms latency on the same PCs that show 150 ms with PulseAudio (and with less CPU load, btw).

                    But JACK requires setup and isn't as flexible as PA, which is why it's mostly used in audio workstations.

                    I've yet to encounter a DDR4 bank that "sucks ass so hard" just because it has slightly higher latency at lower frequencies. All applications are mostly bandwidth-dependent, and even then it's HARD to notice
                    (only iGPUs notice, as they do need tons of bandwidth).
                    JACK only passes audio through; it just routes the stream. LADSPA is what you are actually talking about. Personally, I don't think source audio should ever be resampled. If the hardware requires it, then the hardware driver should do it.
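
                    As an illustration of the work being pushed around here, resampling at its crudest is just linear interpolation between neighboring samples (resample_linear is a hypothetical helper; real resamplers use proper filters):

                    Code:
                    /* Naive linear-interpolation resampler sketch. Returns the
                     * number of output samples written; out must be sized for
                     * n_in * out_rate / in_rate samples. */
                    #include <stddef.h>

                    size_t resample_linear(const float *in, size_t n_in,
                                           float *out,
                                           unsigned in_rate, unsigned out_rate)
                    {
                        size_t n_out = (size_t)((double)n_in * out_rate / in_rate);
                        for (size_t i = 0; i < n_out; i++) {
                            double pos  = (double)i * in_rate / out_rate;
                            size_t j    = (size_t)pos;      /* input index     */
                            double frac = pos - j;          /* fraction to j+1 */
                            float  a = in[j];
                            float  b = (j + 1 < n_in) ? in[j + 1] : in[j];
                            out[i] = (float)(a + (b - a) * frac);
                        }
                        return n_out;
                    }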

                    Comment


                    • #40
                      Originally posted by starshipeleven View Post
                      Most modern audio hardware does not support hardware mixing, so either you force applications to use audio ONE AT A TIME or you do some kind of mixing.
                      Also you need to run some kind of chaperone software for the dynamic addition/removal of audio hardware that happens commonly (like with USB headsets, or with Bluetooth audio).

                      Sure, PA can be better, but your dream of "no, let's make it passthrough-only" is not viable.
                      As far as I'm concerned, mixing should be a driver function. Honestly, it should be part of the kernel interface. Hardware with or without mixers should be completely transparent. So you are right, but you don't seem to understand why you're right or why it's the wrong way to do things.

                      Comment
