Linux Audio Is Being Further Modernized With The 4.1 Kernel


  • #21
    Originally posted by magika View Post
    You're confusing distro and user here. Don't want PA? Don't use it. Uninstall PA and take a breath of fresh air.
    I strongly agree. Removing Pulseaudio is like standing outside on a hot summer day with a nice cool ocean breeze... although when it rains you're stuck out in the cold.

    Pulseaudio is like the heating/cooling in a house: great when it works, but a huge pain when it doesn't. It would be simpler to just buy an umbrella for the occasions when it rains.

    For general use, Pulse should be fine. But as soon as you start getting into JACK territory for performance and configuration it becomes a huge pain, and if you're CPU-limited I would not recommend Pulse at all.



    • #22
      Originally posted by soulsource View Post
      What I'd like to know, since people here tend to call dmix "dirty hackery":
      What is so bad about dmix (compared to pulseaudio)?
      Nothing is bad compared to pulseaudio.
      The only good thing about pulseaudio is that it takes away some complexity; in return it adds considerable delay and other problems.
      I now use jackd in RT mode with qjackctl on an Odroid XU3.
      It's freaking amazing!
      Start your average digital audio workstation software, add some MIDI controllers, start Hydrogen, and route Hydrogen's output to the DAW software, including MIDI BPM and clock.
      If you send the output of Hydrogen to your amp out and to the DAW line in, and send the DAW output to the amp out as well, you hear almost no phase shifting.
      That's very different from the 200+ ms average pulseaudio latency.
      I might take a look at dmix too.
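
      For reference, starting jackd in realtime mode from the command line looks roughly like this (qjackctl just wraps the same options; the hw:0 device and the period/buffer values are only examples you would tune for your own card):

      Code:
      # -R enables realtime scheduling, -P sets the RT priority;
      # 2 periods of 128 frames at 48 kHz is roughly 5 ms of buffering
      jackd -R -P 70 -d alsa -d hw:0 -r 48000 -p 128 -n 2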



      • #23
        Originally posted by Pawlerson View Post
        Blah, blah... Linux is way faster than OS X in OpenGL, so I could say OS X is not ready to be a desktop platform. Its graphics system is way too messy (tm). Windows 7 doesn't recognize my Logitech gamepad, which works out of the box under Linux. Windows is not desktop ready yet...
        You could say that, and I would completely agree with you: OS X is absolute trash, and iOS is no better while we're at it. Windows 7 not recognizing your Logitech gamepad while Linux does sounds impossible though; did you install the drivers? Logitech's Linux support has always been a disgrace (read: non-existent). I like Linux, I hate OS X, and I'm not a fan of Windows. I'm hoping Linux will improve enough to surpass Windows sooner rather than later; despite its major flaws from the user's/consumer's perspective, I dislike it the least of the three mainstream desktop operating systems. Windows is just the lesser of two evils (the other being OS X, what a nasty OS...). Linux isn't evil like the others, which is its virtue, but it just isn't strong (read: good for consumers) enough. Yet.

        Originally posted by gamerk2 View Post
        Linux does do one thing right: Per-application audio control. It's something I've been hoping for on Windows for ages now.
        What do you mean? Windows has had this feature at least since Vista or 7, possibly since XP. I happen to be running 7 right now, and I can mute my browser but nothing else. What Windows does better than Pulse is that per-application volumes are relative to the master channel: if you increase the master PCM, all volumes increase with it while keeping their proportions. For example, if an application sits at 9% while the master is at 10% and you raise the master to 100%, the application ends up at 90%, still 10% below the master. In short, all sounds are relative to the master channel, and no application can override the master channel's volume setting (except freaky shit like VLC, which can internally be configured to something like 250% volume). Its per-application volume mixing is absolutely perfect.

        But honestly, the only thing I use this feature for is to occasionally (very, very rarely) mute applications that are making noise I don't want to hear and can't disable inside the programs themselves. The rest of the time I just adjust the master channel, since all sound is relative to it. It's basically the best of both worlds: if you don't like per-application mixing, just use the master channel and it behaves exactly as it does in ALSA (where all applications follow the master channel's setting), because that's what applications do by default. If I recall correctly, the biggest problem I had with Pulse was that the application volume sliders weren't relative to the master setting; I could increase the master channel but the other applications wouldn't follow. However it went down, I just remember that I passionately hated PulseAudio's implementation of per-application volume mixing, and it was one of the major reasons I couldn't stand to use Pulse. It's not a feature worth the trouble it gave me; I also recall reading up on how to disable it and just running into more trouble then. PulseAudio doesn't like me any more than I like it, apparently.
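
        (For what it's worth, I believe the behaviour I'm complaining about is PulseAudio's "flat volumes" mode, which can be switched off in /etc/pulse/daemon.conf; with it disabled, application volumes are supposed to stay relative to the master, like the Windows behaviour described above. I haven't gone back and verified this myself, so take it as a pointer rather than a fix:)

        Code:
        # /etc/pulse/daemon.conf -- make per-application volumes relative to the master
        flat-volumes = no

        # restart PulseAudio so the change takes effect
        pulseaudio --kill && pulseaudio --start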


        Originally posted by stqn View Post
        Alsa has always worked perfectly for basic audio playback for me, but yes I will be happy when it is finally possible to:
        - set the default input device (for software that needs audio input but doesn’t let you select the device)
        - record the sound output (it's currently impossible to make screencasts with sound using ALSA)
        - disable this fucking jack sense thing that cuts audio output to the speakers when headphones are plugged in.
        Sure you can record sound output from ALSA; I've done it with ffmpeg. I admit it was a pain to figure out how to configure, but whenever I tried to use Pulse I got massive audio lag and the sound was always desynced from the video; the only time audio ever worked in my recordings was when I used ALSA.

        Let's see if I can find out what the trick was again since it's been a while since I did this....

        Code:
        ffmpeg -f alsa -ac 2 -i plughw:0,0 -async 1
        That's the main part as far as audio goes. After the audio parameters came the usual -f x11grab -s 1920x1080 -i :0.0 for the video capture, followed by my encoding settings and the output filename: -c:a libmp3lame -b:a 384k -c:v libx264 -b:v 4000k -profile:v high444 -r 30 -preset ultrafast output.mkv

        Or in summary:

        Code:
        ffmpeg -f alsa -ac 2 -i plughw:0,0 -async 1 -f x11grab -s 1920x1080 -i :0.0 -c:a libmp3lame -b:a 384k -c:v libx264 -b:v 4000k -profile:v high444 -r 30 -preset ultrafast output.mkv
        If I recall correctly the command might not be complete yet, but should mostly work. I can't test it right now since it seems ffmpeg isn't fond of recording when I use enlightenment. Of course you also have to replace the plughw:0,0 with the correct card/device numbers for your sound card (0,0 is the default usually)
        Last edited by rabcor; 16 April 2015, 03:57 PM.



        • #24
          Originally posted by gamerk2 View Post
          Linux does do one thing right: Per-application audio control. It's something I've been hoping for on Windows for ages now.
          You mean that thing Windows 7, maybe Vista, has? For years?



          • #25
            Yeah, PulseAudio really is a "works for some, not for others" kind of program. I am not an audio guy, so I don't dare claim that we don't need a sound server for modern use, but why couldn't we just fix ALSA instead of adding another layer?



            • #26
               ^My thoughts exactly. From the moment I first started trying to configure my sound in Linux and discovered the mess called ALSA and the horror called Pulseaudio, all I could think was "Why not fix ALSA instead of creating some crap like Pulse?"

              Originally posted by Ardje View Post
               Nothing is bad compared to pulseaudio.
               The only good thing about pulseaudio is that it takes away some complexity; in return it adds considerable delay and other problems.
               I now use jackd in RT mode with qjackctl on an Odroid XU3.
               It's freaking amazing!
               Start your average digital audio workstation software, add some MIDI controllers, start Hydrogen, and route Hydrogen's output to the DAW software, including MIDI BPM and clock.
               If you send the output of Hydrogen to your amp out and to the DAW line in, and send the DAW output to the amp out as well, you hear almost no phase shifting.
               That's very different from the 200+ ms average pulseaudio latency.
               I might take a look at dmix too.
               dmix is not a program; it's an ALSA plugin that lets multiple audio streams share one sound device. (If dmixing is not enabled in ALSA and you're not running your audio through something like pulseaudio, you cannot play more than one sound source through one set of speakers at a time: one application hooks itself up to the sound device, and no other application can access it until that application stops using it.)

               As stated in that link I gave, dmixing is enabled by default for all sound cards that don't support hardware mixing and for all analogue sound devices. This means that basically the only time you have to configure dmix manually is when you have a digital device to work with (like HDMI or S/PDIF, basically anything that doesn't use traditional audio jacks as far as computers are concerned).

               God knows why dmixing isn't enabled by default for digital devices, but have no fear. As shown in this example, you can manually configure dmixing in your .asoundrc or asound.conf. Basically, to just use raw ALSA you need dmixing, and if it is not enabled by default for your specific device, you can do this:

              Code:
               pcm.dmixed {
                   type asym
                   playback.pcm {
                       type dmix

                       ipc_key_add_uid true
                       ipc_key 5678293
                       ipc_perm 0660
                       ipc_gid audio

                       slave {
                           # 2 for stereo, 6 for surround51, 8 for surround71
                           channels 6
                           pcm {
                               #format S32_LE
                               #format S24_LE
                               format S16_LE

                               # 44100, 48000, 96000 or 192000
                               rate 48000

                               nonblock true

                               # The device you want to dmix is defined here:
                               type hw
                               card 0
                               device 0
                               subdevice 0
                           }

                           period_size 1024
                           buffer_size 8192
                       }
                   }
                   # apulse (a cut-down replacement for pulseaudio) needs dsnoop
                   # https://bbs.archlinux.org/viewtopic.php?id=187258
                   capture.pcm "dsnoop"
               }

               pcm.!default {
                   type plug
                   slave.pcm "dmixed"
               }

               And voilà, dmixing on your default card (0,0). But you only need to do this if you're using a digital audio device; if you're using traditional speakers on traditional audio jacks, you don't even need to make a config file, you can just use raw ALSA as it comes.

               As for configuring your input and output devices, well, we do say ALSA is a mess, and this is why: there is no good way to configure your default devices, since ALSA wants them to be system-defined, and don't get me started on multi-channel support (although luckily, in the above example the dmixing PCM handles multi-channel support for you).
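
               (The closest thing I know of to a sane way is pinning the default card index in ~/.asoundrc, which works for the simple case where you are not overriding pcm.!default like the dmix config above does; the card number here is just an example, check yours with aplay -l:)

               Code:
               # ~/.asoundrc -- use card 1 as the default playback and control device
               defaults.pcm.card 1
               defaults.ctl.card 1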
              Last edited by rabcor; 16 April 2015, 04:17 PM.



              • #27
                Originally posted by rabcor View Post
                Sure you can record sound output from alsa (?)

                Code:
                 ffmpeg -f alsa -ac 2 -i plughw:0,0 -async 1 -f x11grab -s 1920x1080 -i :0.0 -c:a libmp3lame -b:a 384k -c:v libx264 -b:v 4000k -profile:v high444 -r 30 -preset ultrafast output.mkv
                If I recall correctly the command might not be complete yet, but should mostly work. I can't test it right now since it seems ffmpeg isn't fond of recording when I use enlightenment. Of course you also have to replace the plughw:0,0 with the correct card/device numbers for your sound card (0,0 is the default usually)
                Thanks rabcor, but that command fails with "device is busy". Now that I see this "plughw" thing I remember reading that recording the output was only possible with some sound chipsets.



                • #28
                  Originally posted by stqn View Post
                  Thanks rabcor, but that command fails with "device is busy". Now that I see this "plughw" thing I remember reading that recording the output was only possible with some sound chipsets.
                  Hmm, I don't think that's the case. Here's a thread with a similar problem; it seems someone got around it by using "hw:0,0" instead of "plughw:0,0", so try that.

                  But even if that fails, I think there is a way to create a dummy device (I think they call it loopback or something) to which all sound that goes to your speakers is also forwarded, and you could point ffmpeg at that instead of the physical device. You could also, of course, just record the video with ffmpeg and the audio with Audacity, then open the files in an editor and merge them so that the sound and video are in sync.
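
                  If I remember right, the loopback thing is the snd-aloop kernel module; something along these lines should give ffmpeg a capture device that carries whatever you play, though the exact card/subdevice numbers may differ on your system:

                  Code:
                  # load the ALSA loopback driver (creates a virtual "Loopback" card)
                  sudo modprobe snd-aloop
                  # send your playback to the loopback card (e.g. select it as the output device
                  # in your player or in .asoundrc), then capture the other end with ffmpeg:
                  ffmpeg -f alsa -i hw:Loopback,1,0 -f x11grab -s 1920x1080 -i :0.0 output.mkv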
                  Last edited by rabcor; 16 April 2015, 05:56 PM.



                  • #29
                    Originally posted by rabcor View Post
                    Hmm, I don't think that's the case. Here's a thread with a similar problem; it seems someone got around it by using "hw:0,0" instead of "plughw:0,0", so try that.

                    But even if that fails, I think there is a way to create a dummy device (I think they call it loopback or something) to which all sound that goes to your speakers is also forwarded, and you could point ffmpeg at that instead of the physical device. You could also, of course, just record the video with ffmpeg and the audio with Audacity, then open the files in an editor and merge them so that the sound and video are in sync.
                    I saw in Audacity that my card is hw:0,2, so I tried that; the error message is gone and ffmpeg records the video, but it has no sound. And I can't record with Audacity either, I can only record "Front Mic", "Rear Mic" or "Line".



                    • #30
                      Originally posted by stqn View Post
                      - disable this fucking jack sense thing that cuts audio output to the speakers when headphones are plugged in.
                      That may not be possible in software... depending on how it's been implemented, the whole thing could be handled in hardware, with the kernel sound system having no idea that this behaviour is happening.

                      It's like radio kill-switches on laptops - sometimes it's a soft switch that tells software to turn off the radio, sometimes it's a hard switch that causes the network device to vanish.
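
                      (That said, when it is handled in software it's often exposed as an "Auto-Mute Mode" mixer control on Intel HDA codecs, so it may be worth checking before assuming it's hard-wired; the card index here is just an example:)

                      Code:
                      # list the card's mixer controls and look for an auto-mute switch
                      amixer -c 0 scontrols
                      # if "Auto-Mute Mode" is listed, this should stop the speakers being cut off
                      amixer -c 0 sset 'Auto-Mute Mode' Disabled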

