Ubuntu Desktop To Drop PowerPC Support


  • #41
    Comparing to other distributions

    Originally posted by DanL View Post
    Nope, I didn't say that or even imply it. Again, I provided a counter-example to the statement - "Ubuntu never contributes upstream". Now you've changed your claim to something far more nebulous.

    Well, if you want another example (though I know you'll never be satisfied no matter how many examples you see), here are recent commits by Bryce Harrington to Xserver: http://cgit.freedesktop.org/xorg/xse...r&q=harrington
    I've also seen upstream developers use Canonical tools (like Intel's Chris Wilson using Launchpad and the xdiagnose script): https://bugs.launchpad.net/ubuntu/+s...l/+bug/1203273

    How many more bluebirds do you need to see?
    My statement remains valid as long as you can't identify Bryce Harrington as a main representative of the Ubuntu operating system. It's that simple.
    I see he commits some stuff to the X server, but who knows whether these commits were part of Canonical's business strategy.

    I'm sure they have a policy which allows developers to fix bugs when they encounter them. Judging from the size and type of these commits, there is no indication of a real direction.

    If you value those petty bugfixes as real contributions, then you're right.

    But look at other distributions: Gentoo developers, for example, work on their own userspace devfs (eudev), Debian has its own Linux kernel team, and Red Hat is the largest contributor to X.org.
    It is an insult to every one of them when somebody like you tries to put Ubuntu on par with them.

    The Ubuntu developers may not "never" contribute upstream, but compared to other distributions, it's a bloody joke.

    And you know that.


    BTW: Stop being so cocky. Take this as friendly advice.



    • #42
      Originally posted by DanL View Post
      What truth? The truth that Canonical doesn't contribute as much to gnome as Red Hat? They still contributed something, which disproves the statement that I originally objected to. Thanks for bolstering my argument (and finding another bluebird for me).
      Have it your way

      (embedded video: a Family Guy clip)



      • #43
        Originally posted by frign View Post
        Gentoo developers, for example, work on their own userspace devfs (eudev)
        eudev is a fork, and isn't Gentoo the only distribution intending to use it? How is that contributing upstream, and not just another case of destroying world peace?



        • #44
          Wrong

          Originally posted by AJenbo View Post
          eudev is a fork, and isn't Gentoo the only distribution intending to use it? How is that contributing upstream, and not just another case of destroying world peace?
          Eudev is not limited to one distribution and the developers (including myself) don't intend this. Get your facts straight.

          Also, your definition of world peace is twisted.



          • #45
            Originally posted by frign View Post
            Eudev is not limited to one distribution and the developers (including myself) don't intend this. Get your facts straight.
            Don't get your panties all in a bunch. I know that they intend for it to be distribution-agnostic. What I clearly stated was that no other distribution has shown interest in using it. And by forking they are not contributing upstream.



            • #46
              Forking

              Originally posted by AJenbo View Post
              Don't get your panties all in a bunch. I know that they intend for it to be distribution-agnostic. What I clearly stated was that no other distribution has shown interest in using it. And by forking they are not contributing upstream.
              If you don't have good proverbs, don't use them.

              Sometimes, forking is a good way to contribute upstream, especially when the forked project goes in a bad direction (Xonotic, Mage+, LibreOffice, ...).
              The problem with udev is that it is heading towards being systemd-specific. The lack of interest from most distributions stems from the fact that they are using systemd anyway.
              As I don't consider init-system-specific solutions to be ideal, forking is a valuable contribution to the project itself.
              Gentoo is specifically interested in eudev because it is one of the few distributions which allow you to use multiple init systems.

              Please read up on the topic before making unqualified statements.



              • #47
                Originally posted by frign View Post
                My statement remains valid as long as you can't identify Bryce Harrington as a main representative of the Ubuntu operating system.
                Wow, you're really grasping at straws. What/who do you consider a "main representative" of Canonical then? I'm dying to hear this...

                But look at other distributions: Gentoo developers, for example, work on their own userspace devfs (eudev)
                As pointed out, if other distros don't care about it, then it's ultimately not an upstream contribution, and it's no different from Mir or Upstart.

                BTW: Stop being so cocky.
                Likewise...



                • #48
                  Originally posted by dee. View Post
                  No, it wouldn't be very cool. We had that situation back in the 80s and early 90s, with a plethora of computer platforms, all incompatible with each other.
                  Yeah, we only need one single architecture everywhere! x86_64 everywhere, forever!!!!

                  ...uh... wait... How come, then, that the current best-selling hardware platform (ARM) isn't even x86 compatible?

                  If we follow your logic, any computing device should only be an x86_64 variant running some custom Windows build.


                  Well, the situation has changed a lot since the 80s (and early 90s). Back then every single computer platform had a completely different set of innards, *BUT ALSO* each one ran a completely different set of operating systems and software, almost all of it hand-written assembler for the particular CPU in that machine and optimized for its hardware quirks.

                  Today, thanks to open source (with source available everywhere and most software written cross-platform in common languages), getting Linux running on anything is usually only a compile away.
                  Linux runs on x86 & x86_64 (most popular on desktops & laptops), but also on ARM (most popular on smartphones/tablets/ultra-light netbooks), on MIPS (very popular in routers/modems), on PowerPC (PlayStation, some servers) and on several other platforms (other server CPUs like SPARC, etc.).

                  Not only that, but more and more software doesn't even care what the CPU is:
                  - software written in scripting languages (HTML5/CSS/JavaScript is immensely popular)
                  - software compiled into bytecode (Android is built around the Java-like Dalvik VM)
                  - even Windows 8 (though not open source): since they started offering ARM platforms as well, they strongly recommend and support cross-platform applications, either in HTML5 or compiled into .NET bytecode
                  etc.

                  In short, these days the architecture doesn't matter that much: you'll still get your Linux flavour for it. (You already have x86_64, ARM and MIPS, which are *very* widespread.)
                  Whereas in the old days, it wasn't only Z80 vs 6502 vs 8088/x86 vs 68000, but also MS-DOS vs. CP/M vs. AMOS vs. STOS vs. the C64 BIOS, etc., all with a bunch of different hardware to talk to directly.

                  Originally posted by dee. View Post
                  It would be cool if there were a common open-source CPU architecture that all CPU manufacturers could use without licensing or royalties or patent threats. Every computer could be using the same arch. Also, I'd like to win the lottery and live forever, as long as we're wishing...
                  Hello, please say "welcome" to "Leon", a SPARC-based CPU whose VHDL is released under the LGPL. Sun themselves have also released a few cores under an open-source license as OpenSPARC. There's also OpenRISC.

                  There are open cores out there. What is needed is a real market for them. (Not much interest beyond academia, for now.)

                  Originally posted by duby229 View Post
                  The bottom line is, if PA is using the Alsa interface incorrectly then it is a problem PA. This "pass the buck" crap needs to stop. Alsa works just perfectly without PA. It's PA that doesnt work.
                  Well, the problem isn't that Pulse is abusing ALSA in some incorrect way. The problem is that you can access audio hardware in several different ways.

                  You can play stuff one-shot:
                  You can load a small audio file and simply tell the sound chip to play it. (That's what happens when a desktop application plays a small sound effect.)

                  You can also do something which looks like double buffering:
                  You fill a buffer with audio, send that buffer to the sound card, then while it is playing, fill another buffer; when the card has finished the previous one, send the current one to play, then proceed to the next buffer, etc. (That's what happens when you play a long audio sequence: a big MP3 file, streaming a web radio, or mixing several sound sources with a software mixer.)
                  Note that this kind of buffering requires a compromise between uninterrupted audio playback and latency. Either you use BIG buffers (each holding, say, 1 s worth of sound) and the chance that playback gets interrupted is very low (you always have a few one-second buffers of headroom before reaching the point where you have nothing left to play), but the latency is huge (if an app wants to play a sound effect, it will only be added to the next buffer being filled and thus will only be heard a few seconds later, once the buffers already in line have finished playing). Or you take the opposite approach (CPU usage is high as it constantly fills small buffers, and sound glitches have a high risk of happening in case of bad thread scheduling, but at least, since the buffers are small, the latency isn't that bad).
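
                  To make that compromise concrete, here is a minimal sketch of the chunked-write style (my own illustration using ALSA's simple blocking API, not anybody's actual player code); the chunk size and the requested buffer length are the two knobs that trade latency against glitch-resistance:
                  Code:
                  #include <alsa/asoundlib.h>
                  #include <string.h>

                  #define RATE     44100
                  #define CHANNELS 2
                  #define FRAMES   4410   /* 0.1 s per chunk: the latency/CPU knob */

                  int main(void)
                  {
                      snd_pcm_t *pcm;
                      short chunk[FRAMES * CHANNELS];   /* 16-bit interleaved stereo */

                      if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
                          return 1;
                      /* ask for ~0.5 s of driver-side buffering: the "headroom" above */
                      snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                         SND_PCM_ACCESS_RW_INTERLEAVED,
                                         CHANNELS, RATE, 1, 500000);

                      for (;;) {
                          /* a real player would decode the next 0.1 s here;
                             this sketch just queues silence */
                          memset(chunk, 0, sizeof(chunk));
                          /* blocks until the card has room for another chunk */
                          long written = snd_pcm_writei(pcm, chunk, FRAMES);
                          if (written < 0)
                              snd_pcm_recover(pcm, (int)written, 0);   /* e.g. after an underrun */
                      }
                  }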

                  You can also do something which looks like a ring buffer:
                  Audio continuously loops over the same buffer, and Pulse keeps filling that buffer with a varying amount of "ahead" time. If you're listening to a web radio, Pulse will fill the whole buffer ahead, put the CPU to sleep, wake up half a second later, append half a second's worth of audio, and go back to sleep. At any point in time there's a lot of headroom between the part of the buffer being played and the part being filled. If suddenly an immediate sound is needed, with low latency, Pulse will start rewriting the buffer just a few samples ahead of the "pointer" where sound is read. Pulse stays only slightly ahead and finishes writing audio almost right before it gets played; it is practically feeding the audio in real time as it plays. The latency is minimal (though CPU usage gets higher, but only during that time).
                  Thus, unlike the previous solution, you don't need to make compromises. Pulse constantly tunes itself by varying how far ahead of the currently played sample it writes into the circularly playing buffer.

                  This last one is a perfectly "normal" mode of operation. The problem is that Pulse is the only piece of software that works this way under Linux; every other sound system uses exclusively one of the first two methods.
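
                  In (deliberately simplified) code, the write-ahead idea looks roughly like the toy model below; this is only my illustration of the scheme described above, not PulseAudio source, and the two targets are arbitrary examples:
                  Code:
                  #include <stdio.h>

                  #define RING_FRAMES 48000          /* 1 s ring buffer at 48 kHz */

                  struct ring {
                      float samples[RING_FRAMES];
                      unsigned long read_pos;        /* advanced by the hardware */
                      unsigned long write_pos;       /* advanced by the mixer    */
                  };

                  /* frames already written but not yet played */
                  static unsigned long headroom(const struct ring *r)
                  {
                      return r->write_pos - r->read_pos;
                  }

                  /* On each wake-up, top the buffer up until it is `target` frames
                     ahead of the read pointer.  A music player uses a big target and
                     sleeps a lot; a VoIP/game path uses a tiny target and wakes often. */
                  static void refill(struct ring *r, unsigned long target)
                  {
                      while (headroom(r) < target) {
                          r->samples[r->write_pos % RING_FRAMES] = 0.0f; /* next mixed frame */
                          r->write_pos++;
                      }
                  }

                  int main(void)
                  {
                      static struct ring r;          /* zero-initialised */

                      refill(&r, 24000);             /* lazy mode: ~0.5 s ahead, then sleep */
                      printf("headroom: %lu frames\n", headroom(&r));

                      r.read_pos += 23800;           /* the hardware kept playing meanwhile */

                      refill(&r, 480);               /* low-latency mode: only ~10 ms ahead */
                      printf("headroom: %lu frames\n", headroom(&r));
                      return 0;
                  }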

                  So even if this mode is "supposed to work", you might find bugs in the audio driver that only Pulse hits, because it's the only software functioning that way. You thought ALSA was functioning perfectly, whereas actually it is not: ALSA is buggy, it just happens that only Pulse runs into the bug. Or maybe the driver is technically correct, but your piece of hardware is half broken. It works under Windows because its driver is twisted accordingly to adapt to the quirks of the hardware, but it doesn't work under Linux because those workarounds aren't there. And again, the weirdness only shows up with Pulse: probably either the other playback modes were fixed long ago because people noticed the problems, or the problems only arise when the circular buffer is used and nobody noticed until Pulse.
                  In the end, there are needed fixes that should go into ALSA but aren't there. Pulse can't do much about it (no matter what the Pulse programmers do, they are stuck: there is nothing you can do if the underlying ALSA stack can't correctly return the "currently playing" pointer).
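
                  For what it's worth, the query that everything hinges on is tiny; the sketch below (assuming an already-opened ALSA handle, and written only as an illustration) shows the call a timer-based mixer schedules its wake-ups and rewrites around. If the driver answers it wrongly, the client overwrites audio that is still playing or underruns, through no fault of its own:
                  Code:
                  #include <alsa/asoundlib.h>

                  /* How many frames have been handed to the driver but not heard yet. */
                  long frames_still_queued(snd_pcm_t *pcm)
                  {
                      snd_pcm_sframes_t delay;
                      if (snd_pcm_delay(pcm, &delay) < 0)
                          return -1;                 /* driver couldn't even tell us */
                      return (long)delay;
                  }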

                  Now the thing is, this feature (low latency when playing real-time sound, or conversely the ability to put the CPU to sleep and save power when it's just predictable audio playing) is actually important. Low latency really matters in several end-user scenarios (mostly VoIP calls and games), and keeping CPU usage low matters too (while playing music, especially if the device in question is portable and runs on a battery).
                  What Pulse tries to do is emulate the behaviour of a hardware mixer (mixing sound with very low latency and low power). The problem is that such mixers are getting rarer: the current tendency is to just put in a chip that is basically a multi-channel duplex DAC/ADC and do everything in software. (How many people in this thread have an Audigy sound card with hardware mixing vs. how many are just using their on-board "Intel HDA" chip?) So you pretty much can't run away from Pulse; it's the only viable way to have sound in games, Skype and web radios.
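
                  And to be clear, the mixing arithmetic itself is the easy part; a toy version (just an illustration of the principle, not Pulse's actual mixing/resampling path) is little more than a sum with clipping. The hard part is the scheduling described above:
                  Code:
                  #include <stdint.h>
                  #include <stdio.h>

                  static int16_t clamp16(int32_t v)
                  {
                      if (v >  32767) return  32767;
                      if (v < -32768) return -32768;
                      return (int16_t)v;
                  }

                  /* Mix `nstreams` 16-bit streams of `nframes` samples each into `out`. */
                  static void mix(int16_t *out, int16_t **in, int nstreams, int nframes)
                  {
                      for (int f = 0; f < nframes; f++) {
                          int32_t acc = 0;
                          for (int s = 0; s < nstreams; s++)
                              acc += in[s][f];
                          out[f] = clamp16(acc);
                      }
                  }

                  int main(void)
                  {
                      int16_t a[4] = { 1000, 2000,  30000, -30000 };
                      int16_t b[4] = {  500, 2000,  30000, -30000 };
                      int16_t *streams[2] = { a, b };
                      int16_t out[4];

                      mix(out, streams, 2, 4);
                      for (int i = 0; i < 4; i++)
                          printf("%d ", out[i]);     /* 1500 4000 32767 -32768 */
                      printf("\n");
                      return 0;
                  }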

                  But... as with any newer technology, it will require testing, fixing broken drivers, circumventing broken hardware, etc.

                  Originally posted by Vim_User View Post
                  Have you tried different distros (you know that live media are available for most distros?) to rule out if your problems are really caused by PA in general and not only by the PA used in your distribution?
                  I agree.

                  There are good distros doing a decent job of packaging Pulse (my openSUSE seems to be one of them). There are also very bad distros which tend to think along the lines of "hey, Pulse version 0.0.1-prealpha is out! Let's make it a hard requirement!".

                  That's the behaviour which brings problems to Pulse (which would otherwise be a useful piece of technology). The same thing also happened with KDE 4 (with several distros switching to the "technology preview" without much thinking). And the same will very likely happen in the future with Wayland: on one side you'll have distros taking great pains to provide a well-integrated preview experience and a decent fallback for users who prefer to wait, and on the other you'll probably have a bunch of distros just throwing in whatever version is currently deemed releasable (you'll find a KDE 5 preview running on a Wayland beta and the whole thing crashing like it's Windows 9x).



                  • #49
                    Representation

                    Originally posted by DanL View Post
                    Wow, you're really grasping at straws. What/who do you consider a "main representative" of Canonical then? I'm dying to hear this...
                    There's no single definitive representative developer for Ubuntu, but I would consider the majority of its developers a representative sample.
                    One developer alone is definitely not enough, as you may understand.



                    • #50
                      Originally posted by duby229 View Post
                      So? PA needs to die. Just because this guy works for canonical doesnt mean jack shit. PA is still buggy ass broken nonfunctional bloatware. Personally I'm sick and tired of distro's, Fedora and Ubuntu especially, pushing buggy ass nonfunctional bloatware on us and trying there very hardest to convince us that it is the "only" way.
                      Oh yes. I just discovered that Pulseaudio 4.0 includes a patch to the resampling code that is literally incomplete and caused crackling and all sorts of other problems. (Obviously incomplete, even!) The patch has since been reverted and this will presumably be included in the next release - along with a bunch of new regressions and bugs because they never actually stop changing things for long enough to stabilise their code. More annoyingly, I'm not sure it was even the source of the crashes I was seeing in the resampler.

