
The KDE vs. GNOME Schism In Free Software


  • #81
    Originally posted by V!NCENT View Post
This will be legendary (and historic), because I'm going to defend BlackStar (to think that that day would ever come).
Oh hi, V!GNORANT. Nice to see you again. Unfortunately, just like the last time we got into a spat, you don't really have anything insightful or relevant to say, and here is why:

    Originally posted by V!NCENT View Post
    @Ninez,

    If your brain can't adjust to 20ms delay, you need a new one.

    -Vincent
Ya, musicians just love a delay when playing instruments and/or recording them. Same goes for recording voice. In case you can't tell, that is sarcasm. It's not a matter of adjusting to the delay; it comes down to precision and accurate recording.

To quote myself:

    You do realize that a real piano is somewhere around 8ms, and most analog keyboards are lower than that too, right?
    and

...acceptable latency under different workloads... Simple: using a MIDI keyboard and wanting accurate-sounding audio, 20ms is too high. You've already said the opposite (which, beyond a shadow of a doubt, illustrates that you don't know what you're talking about). I've already explained this - you should have no more than 8ms if you actually intend to play it... Recording (live instruments, MIDI) requires low latency for the capture to be accurate; otherwise, with MIDI, you might as well be step-sequencing, or have to quantize your playing, losing all natural feel.
I guess you missed that, eh?

So it really doesn't come down to whether or not I can adjust to a 20ms delay. It isn't suitable. When recording or playing, you want the LEAST amount of latency, not to have to adjust - especially since, even if you could adjust (and you're correct that adjusting to 20ms isn't hard for the brain to do), that is most certainly not the case when playing with other musicians and/or recorded parts. When recording, you want accuracy, and latency interferes with that. Furthermore, try playing a fast piece of music on a keyboard with higher latency and listen as all of your triplets and rolls sound like 'shoes in the dryer' - it is literally a no-brainer to realize that, if you work with audio/MIDI.
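To put rough numbers on the latency argument above: in digital audio, the buffer latency is set mostly by the period size, the number of periods, and the sample rate. A minimal sketch with illustrative JACK-style settings (the function and the example figures are mine, not from the posts):

```python
def buffer_latency_ms(frames_per_period, periods, sample_rate):
    """One-way latency contributed by the audio buffer, in milliseconds."""
    return frames_per_period * periods / sample_rate * 1000.0

# 64 frames x 2 periods at 48 kHz stays well under the ~8 ms of a piano action
low = buffer_latency_ms(64, 2, 48000)    # ~2.67 ms
# 512 frames x 2 periods at 48 kHz lands right around the 20 ms being argued about
high = buffer_latency_ms(512, 2, 48000)  # ~21.33 ms
```

This is only the buffer's contribution; converters, drivers, and MIDI transport add more on top.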

So, V!GNORANT, jumping into this argument is completely pointless. You brought a completely pointless argument that doesn't even apply and really doesn't even make sense. Furthermore, there really isn't any argument to be made.

    Not surprising though

    bye bye
    Last edited by ninez; 25 October 2011, 02:44 PM.



    • #82
      @Ninez,

      /***************************\
      First point
      \***************************/
Music 'workstations' are all digital, and thus so is electronic music.

Electronic music these days is no longer 8-bit bleeps and booms; it's sounds, vocals, strings, entire studio performances, etc., being recorded and edited/pasted in a music editing program and/or a collection of such programs.

Any professional (including you - you are one, right?) knows that.

Professional keyboards have speakers. They record over MIDI, which doesn't require playback through the workstation speakers while being recorded.

These programs have (endless amounts of) timelines. Claiming you can perceive any delay below 21ms between playback and what you see on the timeline(s) is absolute bullshit.


      /***************************\
      Second point
      \***************************/
Now the brain replacement part.

      Any human can deal with delays. Saying otherwise is retarded. I only need to give you the wireless mouse example and we're effectively done talking on this one.


      /***************************\
      New points
      \***************************/
Calling names is pathetic, and I hope you don't do it in your professional environment. So stop being so delusional as to think you're 100% right just because you have some limited experience in audio. That's seriously pathetic, man.

      I know BlackStar is a troll at times, and maybe he is now, but he is right. I'd say back up your shit, or calm down. Maybe you should do both.

What's funny is that the definition of ignorance comes from ignoring things due to self-righteous thinking based on some experience/title. I guess that's you.



      • #83



        • #84
          Originally posted by V!NCENT View Post
          @Ninez,

          /***************************\
          First point
          \***************************/
Music 'workstations' are all digital, and thus so is electronic music.
Depends on what you are referring to as 'electronic music'. It's digitally recorded, yes, but 'electronic music' typically/historically refers to a genre of music, not to the fact that it was recorded digitally. But really, what relevance does this even have?

          Originally posted by V!NCENT View Post
Electronic music these days is no longer 8-bit bleeps and booms; it's sounds, vocals, strings, entire studio performances, etc., being recorded and edited/

Any professional (including you - you are one, right?) knows that.
You are speaking (vaguely) of a particular set of workflows, and it seems you are also speaking about a specific genre (and sub-genres). AFAIK, people record all sorts of music digitally; the workflow of the Ableton crowd, hip-hop crowd, techno crowd, etc. isn't *universal*. Again, what relevance does this have? (None.)

You are making a grand assumption that all music is recorded the exact same way, but AFAIK it's NOT, i.e., you don't record a jazz/rock/blues/live band using sample banks, chopped up, etc. So the conclusion you are drawing is retarded. That being said, a live band may very well have some keyboard tracks recorded with MIDI, or use massive sample banks, but that still doesn't make it universal. Thus, your argument is pointless/null and void.

          Originally posted by V!NCENT View Post
Professional keyboards have speakers. They record over MIDI, which doesn't require playback through the workstation speakers while being recorded.
Some do, some don't (if you meant actual speakers)... Outputs? Yes, as long as they have a sound module and aren't just a MIDI controller... And NO, keyboards aren't always going to be recorded using MIDI; sometimes you may own a keyboard whose sound banks are better than VSTs and will be inputting it directly - especially the really high-end stuff. So you are, again, incorrect in your assumptions.

          Originally posted by V!NCENT View Post
These programs have (endless amounts of) timelines. Claiming you can perceive any delay below 21ms between playback and what you see on the timeline(s) is absolute bullshit.
Mostly correct, depending on BPM and time signature (if it is heavily syncopated and at a fast BPM, then no). And it still doesn't change the fact that for recording you want low latency, that 20ms is high latency for playing, and that the human ear can hear it - which is NOT ideal or particularly suitable, unless you're a hack who can't actually play keyboard very well and just needs to capture some chords or a bass line, in which case you aren't going to be affected as much, because you're more of a producer than a performer (so there is no natural feel there anyway).

Furthermore, my example of triplets still stands. I have seen Pro Tools, Logic 9, Ableton Live, Ardour3, oomidi and Renoise all fail at capturing fast playing. Quantization exists for a reason; that should be pretty obvious.

          Originally posted by V!NCENT View Post
          /***************************\
          Second point
          \***************************/
Now the brain replacement part.

          Any human can deal with delays. Saying otherwise is retarded. I only need to give you the wireless mouse example and we're effectively done talking on this one.
A wireless mouse vs. live instruments - that is retarded... LOL, you just officially lost that one. Ya, you're right, we are effectively done on this one!

That is a terrible example. Wireless or Bluetooth adds massive latency; it might be suitable for pointing and clicking on your desktop, but if you think it is acceptable for playing music, you would be 100% wrong. How do I know this?

Well, I've used an iPad with a computer/in Logic, and I have also used a Wiimote, including the Wii drums.

Wireless is not suitable. You will be able to compensate for 1/8 and 1/16 notes; anything beyond that will need to be quantized, as it will be inaccurate!

          Originally posted by V!NCENT View Post
          /***************************\
          New points
          \***************************/
Calling names is pathetic, and I hope you don't do it in your professional environment. So stop being so delusional as to think you're 100% right just because you have some limited experience in audio. That's seriously pathetic, man.
Sorry if you don't like that I've mangled your handle, but I couldn't care less. I also couldn't care less whether you think I am pathetic. I don't need to be 100% right, but I have no problem pointing out your idiocy (although I have restrained myself most of the time, even though the majority of your other posts are retarded).

Clearly, I have more audio experience than either of you, so ya, I can contest something you say. If I am in a restaurant and someone starts to have a medical problem or starts to choke, I want the off-duty doctor or emergency worker to handle it, not some asshole who wants to be a hero. So, yes - knowledge is important.

What is pathetic is you coming in here to defend Blacktard when it's obvious you just have beef with me over last time. How transparent, and funny.

          Originally posted by V!NCENT View Post
          I know BlackStar is a troll at times, and maybe he is now, but he is right. I'd say back up your shit, or calm down. Maybe you should do both.
I am unbelievably calm; I find this all funny. Blacktard was wrong on almost all counts, and your latest incarnation of drivel is also either not relevant or simply incorrect.

          Originally posted by V!NCENT View Post
What's funny is that the definition of ignorance comes from ignoring things due to self-righteous thinking based on some experience/title. I guess that's you.
You can think I am ignorant all you like. That doesn't mean much coming from you, V!GNORANT (there was a reason I called you that months ago; it still stands today!).

But you know what is even more ignorant than someone having limited experience and being righteous???

Someone who has zero (or even less) experience and gets all righteous while not really having a clue.

          *cough* - both of you



          • #85
            thanks Liam

For those who don't understand what is being said there, and why it is important: higher latency = more jitter, and more jitter = not capturing your performance accurately.

            cheerz



            • #86
You seem to have zero knowledge of recording, so let's attack two issues first:
              -Live band;
              -Band.

              Live bands play live, which is obvious, and record with microphones.

              Bands record in the following order:
1. Drums first, to set the foundation of the track/song, with microphones;
2. Guitars/string instruments, with microphones for analog and wires for electronic;
3. Some optional random instruments, live, and wires for MIDI;
4. Singing/screaming/etc.

Really, in that order. Later, even Auto-Tune.

              Remember, this is billion dollar record label stuff.

              Now onto the electronic music stuff.

All (relevant) electronic music today isn't electronic anymore, in the sense that only the harder styles of dance music (hardhouse, hardstyle, dubstep, DnB, hardcore, etc.) still use synthesizers, and those don't have a fixed BPM anymore, so the argument about fixed BPM is gone. Then the argument about MIDI (LOL) is gone, since it's either computer keyboards (and I do know a very respected dubstep producer, who produces for some well-paid DJs, who uses his laptop keyboard) or USB keyboard plugins for Cubase, etc. That is also gone. Come at me with another joke that I will stomp into the ground.

Other techno music is made with real sounds. I'm talking about minimal techno, tech-house, house and minimal combinations; all of them nowadays use recorded sounds instead of synths. Where's the MIDI? You tell me.

Ambient/Aphex Twin stuff and electro is also a no-brainer. If Aphex Twin can have latency, everyone can, because he's the Dennis Ritchie of techno.

Then jazz. The real deals play live; all mics. 'Nuff said.

              I don't know what sales you do, but please visit a real recording studio and you'll see who's ignorant here.



              • #87
                Originally posted by ninez View Post
                thanks Liam

For those who don't understand what is being said there, and why it is important: higher latency = more jitter, and more jitter = not capturing your performance accurately.

                cheerz
Well, jitter is definitely bad, but the specific point was that consistency, not latency, is what matters.
I'm not a musician, BTW; I'm just interested in realtime Linux solutions, and recording/editing is a popular area for such things. But I don't see why you'd need the Ingo branch for audio, since 1ms latencies are, according to the given thread and article, beneath what the sound cards can truly deal with versus what they report. I suppose the issue is really the scheduler, but I could see using cgroups in combination with the preempt kernel to get sufficient latencies; I haven't tried it, though. Can anyone with experience chime in here?
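For anyone wanting to check what they are already running: a quick way to see which preemption model a kernel was built with is to inspect its config (e.g. /proc/config.gz or /boot/config-*). A hypothetical sketch; the CONFIG_PREEMPT* symbols are the standard kernel Kconfig names (CONFIG_PREEMPT_RT_FULL being the -rt patchset's option in this era), while the function and labels are mine:

```python
def preempt_model(config_text):
    """Classify a kernel .config's preemption model from its CONFIG_PREEMPT* lines."""
    enabled = {line.split("=")[0] for line in config_text.splitlines()
               if line.endswith("=y")}
    if "CONFIG_PREEMPT_RT_FULL" in enabled:
        return "full RT (-rt patchset)"
    if "CONFIG_PREEMPT" in enabled:
        return "low-latency desktop (preempt)"
    if "CONFIG_PREEMPT_VOLUNTARY" in enabled:
        return "voluntary preemption"
    return "server (no forced preemption)"

sample = "CONFIG_PREEMPT=y\nCONFIG_PREEMPT_RCU=y"
print(preempt_model(sample))  # low-latency desktop (preempt)
```

On a real system you would feed it the decompressed contents of /proc/config.gz instead of the sample string.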



                • #88
                  Originally posted by V!NCENT View Post
You seem to have zero knowledge of recording, so let's attack two issues first:
                  -Live band;
                  -Band.

                  Live bands play live, which is obvious, and record with microphones.
I suppose you've never seen a live band that partially uses electronics, have you?

And MIDI is used more commonly than you think (beefing up drums using triggers, as one small example).

                  Originally posted by V!NCENT View Post
                  Bands record in the following order:
1. Drums first, to set the foundation of the track/song, with microphones;
2. Guitars/string instruments, with microphones for analog and wires for electronic;
3. Some optional random instruments, live, and wires for MIDI;
4. Singing/screaming/etc.

Really, in that order. Later, even Auto-Tune.

Remember, this is billion dollar record label stuff.
Wow, thanks for the tips - like I wasn't aware of how that one works. Remember, Auto-Tune is cheap - not billion-dollar stuff - and neither is the recording workflow you are referring to. Any studio records bed tracks first. You're not very enlightening, dude. It's quite the opposite.

                  Originally posted by V!NCENT View Post
                  Now onto the electronic music stuff.

All (relevant) electronic music today isn't electronic anymore, in the sense that only the harder styles of dance music (hardhouse, hardstyle, dubstep, DnB, hardcore, etc.) still use synthesizers, and those don't have a fixed BPM anymore, so the argument about fixed BPM is gone.
Who said anything about a fixed BPM??? Everyone uses timing changes in music; not doing so makes music feel rigid. Synthesizers and samplers will always be used in electronic music.

                  Originally posted by V!NCENT View Post
Then the argument about MIDI (LOL) is gone, since it's either computer keyboards (and I do know a very respected dubstep producer, who produces for some well-paid DJs, who uses his laptop keyboard) or USB keyboard plugins for Cubase, etc. That is also gone. Come at me with another joke that I will stomp into the ground.
So what if you know a very respected dubstep producer? His workflow doesn't apply to all music being written and recorded, so that is pretty much irrelevant. You haven't stomped anything into the ground; you're an idiot. Furthermore, I already pointed out that for producers it's not as big a deal, as they aren't playing - they are producing. There is a difference: one has to be accurate and able to actually play; the other doesn't need any musical skill at all, aside from a good ear and some producing skills.

                  Originally posted by V!NCENT View Post
Other techno music is made with real sounds. I'm talking about minimal techno, tech-house, house and minimal combinations; all of them nowadays use recorded sounds instead of synths. Where's the MIDI? You tell me.
Some, not all. It's actually usually a combination of sampled live instruments and electronics - and don't even argue that, as it is obvious and plain as day. Take The New Deal, for example: they play live, but everything except drums was electronic (played live). The Black Ghosts are like that too. I would say the vast majority of electronic music contains synths, and that is obvious from listening to any record... Furthermore, unless they are physically chopping everything by hand and placing it in tracks, they are probably using MIDI and/or OSC. Most if not all DAWs have MIDI sequencers, and thus are using MIDI. Dumbass.

                  Originally posted by V!NCENT View Post
Ambient/Aphex Twin stuff and electro is also a no-brainer. If Aphex Twin can have latency, everyone can, because he's the Dennis Ritchie of techno.
I've seen Aphex Twin twice, and I can tell you right now, he uses synths on stage, with MIDI and sequencers, on top of having a laptop. It was pretty clear from the Nord Lead I saw on stage that he didn't have to worry about latency (as he was using its sound banks). Aphex also isn't always 'playing' so much as tweaking, sampling and triggering (almost everything, aside from tweaking, is pre-recorded/sampled, so latency is not a concern - especially not when tweaking knobs or sending a MIDI message to change sequences/patterns/parts).

And I guess it never occurred to you that he too has used quantization many a time - guaranteed! In fact, in the past it was even more necessary (slower hardware). So this argument is /dev/null. Aphex Twin is more of a producer than a live musician.

                  Originally posted by V!NCENT View Post
Then jazz. The real deals play live; all mics. 'Nuff said.
Again, you are making a grand assumption. Jazz is not restricted to just live instruments. This is the 21st century; musicians make use of modern technology. Your thinking is very rigid.

                  Originally posted by V!NCENT View Post
                  I don't know what sales you do, but please visit a real recording studio and you'll see who's ignorant here.
I've been in plenty of studios, thanks. I attended Selkirk's Contemporary Music and Technology program (several studio facilities);



I've been to many other studios, and I also have family in the music business, thank you very much... You don't know what you are talking about, period. And having some dubstep-producing friend doesn't mean jack shit; you thinking it does is laughable.

Do yourself a favor and just shut up.

Oh, and I don't work in sales; I work in the IT department.
                  Last edited by ninez; 25 October 2011, 05:44 PM.



                  • #89
                    Originally posted by liam View Post
Well, jitter is definitely bad, but the specific point was that consistency, not latency, is what matters.
I'm not a musician, BTW; I'm just interested in realtime Linux solutions, and recording/editing is a popular area for such things. But I don't see why you'd need the Ingo branch for audio, since 1ms latencies are, according to the given thread and article, beneath what the sound cards can truly deal with versus what they report. I suppose the issue is really the scheduler, but I could see using cgroups in combination with the preempt kernel to get sufficient latencies; I haven't tried it, though. Can anyone with experience chime in here?
The mainline (vanilla) kernel is much better these days than it used to be, but you have to be using preemption and have forced IRQ threading enabled. Forced IRQ threading was originally a part of the RT patchset but was merged mainline in 2.6.39... The BFS patchset on a modern Linux kernel also gives decent performance, but you must use its 'isochronous' scheduling policy...

But with the FULL preemption of RT, you are getting much better performance and less chance of outside interference causing xruns. Simply put, there are better guarantees that your task/audio stream/application/etc. will meet its deadlines. The technical stuff I have read about here and there over the years, but kernel development is not my strong point. I will say, though, that RT converts most of the mainline kernel's spinlocks into sleeping locks (rt-mutexes), leaving only a small set of raw spinlocks that truly spin with preemption disabled. There is a lot of info available around the net that explains this stuff in great detail, if that is what you're interested in;

Here are a couple of links; the first one talks about concurrency in both vanilla kernels and RT kernels:



This one is dated, but a lot of the info still applies in concept:



Basically, you can google RT Linux and you will turn up a lot of information, some old, some new, but as before, most of it should still apply.
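As a small concrete footnote to the above: on any of these kernels, an audio thread typically asks for the SCHED_FIFO realtime scheduling class. A minimal sketch using Python's os module on Linux; the priority value 70 is just a common illustrative choice, and actually applying it requires root or an rtprio rlimit:

```python
import os

def realtime_request(priority=70, apply=False):
    """Build (and optionally apply) a SCHED_FIFO request for the current process."""
    param = os.sched_param(priority)
    if apply:
        # Needs CAP_SYS_NICE or an rtprio limit (e.g. via /etc/security/limits.conf);
        # raises PermissionError for an ordinary unprivileged user.
        os.sched_setscheduler(0, os.SCHED_FIFO, param)
    return priority

print(realtime_request())  # 70 (request prepared, not applied)
```

JACK does the equivalent internally when started with realtime scheduling enabled.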

                    Then there is also the RT user list;

                    [email protected]

I'm sure some people on there would answer any questions you may have; they are generally pretty helpful. Mostly on that list you will see patches and announcements, but you might find it interesting.

                    cheerz



                    • #90
                      Originally posted by liam View Post
                      Well, jitter is definitely bad but the specific point was that consistency, not latency, is what matters.
A quote from your posted article:

                      Most musicians can easily adjust to latencies even as high as 15ms, as long as they are reasonably consistent -- it's the jitter that tends to be more problematic, as this determines the amount of 'looseness'.
The lower the latency, the less jitter - in theory.

So latency is very important.

Consistent playing is important (of course), but high latency can interfere with it, and obviously so does jitter. Which is why having low latency is important.
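The distinction the quoted article draws can be made concrete: a constant delay shifts every note equally (zero jitter), while a delay that wanders is what loosens the timing. A small sketch (the function name and sample onset times are mine, for illustration):

```python
import statistics

def jitter_ms(onsets_ms):
    """Jitter as the spread (population std dev) of successive inter-onset intervals."""
    intervals = [b - a for a, b in zip(onsets_ms, onsets_ms[1:])]
    return statistics.pstdev(intervals)

played = [0, 100, 200, 300]                # intended 100 ms grid
constant_delay = [t + 20 for t in played]  # 20 ms latency, perfectly consistent
wobbly = [0, 104, 197, 303]                # lower average delay, but it wanders

print(jitter_ms(constant_delay))  # 0.0 -> consistent latency keeps timing tight
print(jitter_ms(wobbly) > 0)      # True -> the wandering is what loosens the feel
```

This is why both sides of the argument have a point: jitter is what destroys the groove, but lower buffer latency leaves less room for scheduling delays to vary in the first place.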

I forgot to write that in the last post.
                      Last edited by ninez; 25 October 2011, 07:20 PM.

