It's also important to note that context switches do not, by themselves, increase latency. They do reduce the number of CPU cycles available for processing audio before a deadline is missed. Since in most situations there are ample CPU cycles available, this only becomes important when loading up the cores with huge amounts of very expensive (typically pro-audio) processing (reverbs, for example, tend to be very expensive). Even then, the cost of a context switch these days is mostly related to the size of the working set of the switched-to thread, so if you have an audio processing thread that doesn't touch much data, the overhead of a switch is very, very small — much smaller than that incurred by many typical audio processing operations. Unless you plan to chain up dozens of threads with context switches, the overhead is really small, which means the impact on the lowest achievable latency is small.
What *is* important about context switches is that they provide an opportunity for the kernel to get scheduling decisions wrong. This is a real problem, though it's getting better all the time.
It seems to have worked just fine for the legions of games based on DirectSound on Windows. But that aside, I think it would be fantastic for ALSA to provide CoreAudio-like access to the hardware buffer used by most audio interfaces these days. That requires adding a DLL/PLL to estimate the position where the hardware is currently reading/writing, which ALSA does not have at this time. If it did, it would be possible to use the high-resolution timer interrupt to provide arbitrary latency for different applications (i.e. latency would not be based on some multiple of the audio interface interrupt interval, as it is now) AND it would be possible for read/write calls (i.e. a push-model API) to get very, very close to the current read/write location. Adding this to ALSA would be a MAJOR contribution to low-level Linux audio, but it is also a lot of work. It does not, however, require tearing up the infrastructure and device drivers that we already have.

Also, about the push/pull design: IMHO (I'm not an audio programmer) both are important, and having low-latency push (with low CPU usage) is important too. I'm thinking about a game which wants fast audio feedback when a player presses a button, so I'm not sure that a push buffer over a pull mechanism is "good enough" for this use case.
Hello, I'm not an engineer, but I have serious doubts about your viewpoint.
But I have an important side note: floating point comes from the analogue world, the dusty world of warm valve glow. Analogue — leaving aside its disadvantage of a short lifespan due to its sensitivity to "dust" — has two serious advantages over digital:
1* the ability to store much more information per state
2* "softness", which follows from (1)
This is the pure analog signal, which lost out to the digital signal as digital processing evolved and overcame digital's major drawback: low information density.
The "float" that we currently have is not the original "analog", but a fake analog form injected into a digital representation. This is why, contrary to original analog, it is stable — but it pays for that with its mantissa.
Personally, I look at this "fake analog" as a form of smart digital integer value: an integer with built-in positional compression, at the cost of a fraction of its bit capacity.
That means more room for the mantissa, so more precision can be preserved. BUT "fake analog" is always exact, unlike what you claim, because it is rooted in the digital form.
As such, "fake analog", a.k.a. floating point, has the following true advantages over integer:
- it can compress the insignificant part of a value. Example: 2.34e+30 fits a 32-bit float FINE, unlike an int, which will instantly overflow unless constantly guarded against. So floating point operates much more flexibly.
- it can split (divide) values much more accurately. Example: 75/6. Only floating point delivers the exact value.
The problem that "fake analog", a.k.a. float, introduces:
- division operations which produce a non-terminating value, cut off by the mantissa, will be imprecise at the tail. With int, the result would simply be a rounded, truncated, inaccurate value. Example: 1/3. This comes from the nature of "fake analog": its digital base.
The advantages of integer over "fake analog", a.k.a. float:
- as you correctly noted, when occupying its whole bit capacity, an integer value stores more data than a floating-point one, because only the mantissa carries precision — and in an int, the whole value is "mantissa".
- the value is delivered either complete, or broken against "sharp digital bricks", i.e. the modulo value.
The problem with large numbers which you raised will be discussed later on. However, when the range falls outside the available bit capacity, int fails by clipping (at the high end) or by cutting off the tail (at the low end), while float delivers an accurate result with an acceptable error rate.
Again, if the input value requires more bit capacity than is available, both int and float deliver inaccurate results, but the float result is actually USABLE in audio, as long as the values:
1) have a significant part that fits inside the float mantissa, and
2) have an insignificant part whose precision does not matter and which folds within the capacity of the float's exponent space.
This is a very sleek case.
The real applications of integer vs. float depend upon the source of the values and their significance.
Values which have only one significant part, with an unimportant rest, are better processed (lossy-compressed) by float.
Values which can grow huge and are significant at every position are unsuitable for both types. They should be fragmented into packets and stored as integers.
Values which have a specific length and are entirely significant are better processed by int, because a float's unused exponent part only wastes bit bandwidth.
Values which have a specific length but which can or will be divided into a non-terminating range are better processed by float. And this is AUDIO. See:
If you implement an audio system using both approaches:
The float system will lose only in the case where the significant part grows so large that it overflows (does not fit in) the mantissa. Even then, you get a clip that is acceptable for an audio signal.
But the int system will lose:
a) on sheer processing size: float is far more compact, which means more CPU time spent by the integer-based version. Float is also far more flexible — it preserves more detail where it matters, while with int you will have "unused bits" much more often.
b) on sound precision, because you can't divide 75/6 exactly using int. You will end up reinventing the wheel: the same bit-shifting mechanism that is already present in float. That means more clipping and less accurate sound processing with int.
c) for sound streams which happen to fit ideally within both approaches, neither system has an advantage. Since this float is really a "fake analog", all bits will be divided just as correctly as with int.
So, my conclusion as a non-professional non-engineer: in audio processing, float brings more advantages. Walking on your hands is possible, but walking on your feet is more efficient.
I've always been under the impression that floating point is more accurately described as 'scientific notation' in computing, and that floating-point values have advantages over both integer and fixed-point calculations because they are much more 'granular', and can thus hold a much wider range of values.
Anyway, I just thought it was silly that you keep referring to FP as 'fake analog' in your post; it doesn't really come across as a layman's term, but rather just seems like a term that shouldn't have been used to begin with.
I agree with datenwolf that mixing audio with floating point is completely unnecessary. After all, the DACs on audio cards can handle only a fixed number of bits.
Besides, aren't there SSE instructions for integers as well? If so, would they be allowed in the kernel?
The point I was making is that the float value inside digital machines is not as chaotic, noisy or unpredictable as datenwolf has painted it. It's pretty discrete.
So, in my opinion, the only pro-argument for KLANG was/is that integer processing takes fewer resources — which I think is also incorrect, because we have floating-point processing units and dedicated instructions.
I really don't think all this has any value for Linux or its users. It looks like a fight between misunderstandings, and between good and better.
However, making something integrated like CoreAudio out of ALSA+JACK+Pulse, while staying flexible, would be a good thing for KLANG to do. Yet I don't think it's possible without the cooperation of the whole scene. Which in turn means: if datenwolf values truth over personal opinion, let's hope he will cooperate instead of punching a sack in the basement. Because even if he succeeds, the result will just tear the system further apart instead of evolving it. Paul has typed quite a lot of interesting info here, so he (Paul) doesn't seem likely to refuse to cooperate...
(divide four packed single-precision floating-point values: 4x32-bit floats packed in one 128-bit register)
The DIV opcode is listed at 39 cycles.
If an int needs to be converted to float first, that takes 4 to 14 cycles (and the same to convert back).
An int can be easily divided by 2, 4, 8, etc. using shifts (fast).
The problem with computer audio processing is that it must never stall; that means a buffer, which means memory access, which means latency.
PS: compilers are better at generating code for normal ints than for SSE.
Oh, and AVX2 will have support for integers (floats too, of course), meaning the SSE edge that floats have will disappear.
8 channels computed at once...
Most modern sound cards also have dedicated ICs for most of that audio stuff.
To me KLANG is just a really bad idea. If you want to improve the Linux audio stack, work on ALSA, PA and/or JACK... OSS sucks and we don't need to revive it under some new name (KLANG) with some added features while trying to convince every developer to port his/her applications to it. ALSA has better hardware support than OSS (or KLANG), so unless DW plans to port and maintain every driver from ALSA, this just seems like a dumb idea, regardless of any argument about float vs. fixed point...