What Are The Biggest Problems With Linux?

  • Originally posted by Alliancemd View Post
    I just launched KCalc under Kubuntu and it launches instantly... Btw, the GTK calculator is just awful. Press "." a few times... Now press "." in KCalc...
    I don't like it when people lie just because they hate something (in this case KDE and/or Qt).
    Calling it a lie is rather harsh. I think the biggest difference is which framework you already have loaded and thus doesn't need to be loaded again.

    Unfortunately I only have gcalctool available to me at the moment, so I can't try the '.....' thing and am not sure what you mean by that.
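
    For what it's worth, I'd guess the "." test comes down to whether the input handler rejects a second decimal separator in the number being edited. Roughly this kind of check, as a minimal sketch with made-up names (not KCalc's or gcalctool's actual code):

    #include <iostream>
    #include <string>

    // Accept a '.' keypress only if the number being edited
    // doesn't already contain a decimal separator.
    bool acceptKey(std::string &display, char key) {
        if (key == '.' && display.find('.') != std::string::npos)
            return false;              // ignore a second decimal point
        display.push_back(key);
        return true;
    }

    int main() {
        std::string display;
        for (char key : std::string("3.1.4"))
            if (!acceptKey(display, key))
                std::cout << "ignored '" << key << "'\n";
        std::cout << "display: " << display << "\n";  // prints "3.14"
    }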

    Comment


    • My take

      I've been running desktop Linux since 2000 (almost exclusively KDE) and I have to say it has come a long way. However, the thing that really annoys me is the regressions that keep popping up. From one version to the next you can be sure that something that used to work well is broken in some respect. KMail is a frequent candidate for breakage: every new version seems to fix a couple of bugs but then adds a few others.

      I put it down to either not using quality tools and methods or not using them correctly. I've built KDE by hand a couple of times and discovered that only a small fraction of the code has unit tests, and even then a large part of those tests doesn't run. What's the point of having unit tests if you don't care about the results? (A sketch of what a properly registered test looks like follows at the end of this post.)

      When it comes to the kernel, it is my opinion that Linux should go towards a micro-kernel architecture. Sure, there may be a small overhead here and there, but micro-kernels offer vastly superior stability, fault tolerance, security and scalability (core-wise). The current kernel having millions of lines of code running in supervisor mode is a recipe for disaster.
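
      For reference, this is roughly what a Qt-style unit test looks like when it's actually wired up to run. It's a minimal sketch with made-up names, not code taken from KDE itself:

      // testexample.cpp -- a minimal Qt Test case built as its own executable.
      #include <QtTest>

      class TestExample : public QObject
      {
          Q_OBJECT
      private slots:
          void addition()        // every private slot is run as a test function
          {
              QCOMPARE(2 + 2, 4);
          }
      };

      QTEST_MAIN(TestExample)    // expands to a main() that runs the slots above
      #include "testexample.moc" // needed when the test class lives in a .cpp

      Even then, the executable only runs under ctest if it's registered in CMake with add_test() (KDE builds typically use the ecm_add_test() helper), which is presumably how a test can exist in the tree yet never report a result.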

      Comment


      • Originally posted by Staffan View Post
        When it comes to the kernel, it is my opinion that Linux should go towards a micro-kernel architecture. Sure, there may be a small overhead here and there, but micro-kernels offer vastly superior stability, fault tolerance, security and scalability (core-wise). The current kernel having millions of lines of code running in supervisor mode is a recipe for disaster.
        It seems you don't know what you're talking about. There's no sane desktop OS that uses a true micro-kernel. There's no micro-kernel in Windows, OS X, BSD, Solaris and so on. Don't even think about scalability with a micro-kernel. By far the best kernels are the hybrid ones, and the same goes for Linux. Switch to Hurd if you like. You will notice a HUGE overhead. Minix is a great example of this.

        Comment


        • Originally posted by dsmithhfx View Post
          Most complaints seem to revolve around:

          Why can't GNU/Linux be more like Windows/Mac, and still be free?

          Why can't GNU/Linux feature X stop evolving, or at least evolve the way I want it to, even though I never bother to tell anybody what I want until that ship has sailed; heck, I don't even know what I want?

          Why doesn't GNU/Linux work on MY hardware -- I paid good money for it! (the hardware, that is)

          ...ad nauseam.

          Unless you've got skin in the game (and yes, that can include cash contributions, constructive criticism, bug reports, and endeavoring to share knowledge with others), then you're only going to come across as selfish and spoiled.

          Or maybe you're just trolling?
          Couldn't have said it better. The main Linux problems are... stupid winblows users.

          Comment


          • Originally posted by kraftman View Post
            It seems you don't know what you're talking about. There's no sane desktop OS that uses a true micro-kernel. There's no micro-kernel in Windows, OS X, BSD, Solaris and so on. Don't even think about scalability with a micro-kernel. By far the best kernels are the hybrid ones, and the same goes for Linux. Switch to Hurd if you like. You will notice a HUGE overhead. Minix is a great example of this.
            I wrote "towards microkernel", which is where other systems are going.

            Minix is hardly comparable; it was never supposed to compete with commercial kernels. Minix is an educational tool to show students how a microkernel works, which is why it has a number of stupid but simple design choices, such as fixed-size arrays for system structures.

            Hurd has a tiny fraction of the developer resources Linux has, so it's not surprising that it isn't very optimized.

            Comment


            • Originally posted by kraftman View Post
              It seems you don't know what you're talking about. There's no sane desktop OS that uses a true micro-kernel. There's no micro-kernel in Windows, OS X, BSD, Solaris and so on. Don't even think about scalability with a micro-kernel. By far the best kernels are the hybrid ones, and the same goes for Linux. Switch to Hurd if you like. You will notice a HUGE overhead. Minix is a great example of this.
              There is ONX and it seems quite fast. Unfortunately, it's not open source.

              Comment


              • Originally posted by LightBit View Post
                There is ONX and it seems quite fast. Unfortunately, it's not open source.
                I think you mean QNX.
                I've never heard of ONX.

                Comment


                • Originally posted by uid313 View Post
                  I think you mean QNX.
                  I've never heard of ONX.
                  Yes, of course.

                  Comment


                  • Originally posted by Staffan View Post
                    I wrote "towards microkernel", which is where other systems are going.

                    Minix is hardly comparable; it was never supposed to compete with commercial kernels. Minix is an educational tool to show students how a microkernel works, which is why it has a number of stupid but simple design choices, such as fixed-size arrays for system structures.

                    Hurd has a tiny fraction of the developer resources Linux has, so it's not surprising that it isn't very optimized.

                    It adds layers of abstraction in order to have a stable ABI. The user-space API runs in user space, so the underlying links can be changed; the same goes for the hardware layer. Getting something like that for Linux that didn't suck would take a huge engineering effort for very little payoff. It could let you carry along legacy interfaces for backwards compatibility, but that would mean a bigger codebase, and it would be less competitive in the embedded space.

                    A true micro-kernel is more fault tolerant and secure; however, it's not really scalable. Synchronizing threads and locks across multiple layers of abstraction is very difficult, and the more services you push out into user space the more difficult it gets. Minix or the Hurd have difficulty using 10 cores effectively, BSD can use a thousand, and Linux can use 10,000.
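
                    To make the overhead point concrete, here is a crude toy that invokes the same trivial service once as a direct call and once as a round trip over pipes to a separate process, with the pipes standing in for microkernel-style message passing. Real microkernel IPC is far cheaper than pipes and a compiler may optimize the direct loop, so treat it as an illustration rather than a benchmark of any actual kernel:

                    #include <chrono>
                    #include <cstdio>
                    #include <sys/wait.h>
                    #include <unistd.h>

                    static long service(long x) { return x + 1; }  // the trivial "service"

                    int main() {
                        const long N = 100000;

                        // Direct call, as inside a monolithic kernel (same address space).
                        auto t0 = std::chrono::steady_clock::now();
                        long sum1 = 0;
                        for (long i = 0; i < N; ++i) sum1 += service(i);
                        auto t1 = std::chrono::steady_clock::now();

                        // Same work, but each request is a message to a "server" process.
                        int req[2], rep[2];
                        if (pipe(req) != 0 || pipe(rep) != 0) return 1;
                        pid_t pid = fork();
                        if (pid == 0) {                    // server: read request, send reply
                            close(req[1]); close(rep[0]);
                            long x;
                            while (read(req[0], &x, sizeof x) == (ssize_t)sizeof x) {
                                long y = service(x);
                                write(rep[1], &y, sizeof y);
                            }
                            _exit(0);
                        }
                        close(req[0]); close(rep[1]);
                        auto t2 = std::chrono::steady_clock::now();
                        long sum2 = 0;
                        for (long i = 0; i < N; ++i) {
                            long y;
                            write(req[1], &i, sizeof i);   // the "call" becomes an IPC round trip
                            read(rep[0], &y, sizeof y);
                            sum2 += y;
                        }
                        auto t3 = std::chrono::steady_clock::now();
                        close(req[1]);                     // EOF lets the server exit
                        waitpid(pid, nullptr, 0);

                        using us = std::chrono::microseconds;
                        std::printf("direct: %lld us  ipc: %lld us  (sums %ld %ld)\n",
                                    (long long)std::chrono::duration_cast<us>(t1 - t0).count(),
                                    (long long)std::chrono::duration_cast<us>(t3 - t2).count(),
                                    sum1, sum2);
                    }

                    And the synchronization problem compounds it: once shared state sits behind several such servers, every one of those round trips also has to coordinate locking across address spaces.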

                    Comment


                    • There are precisely THREE problems:
                      1) Unusable UIs (e.g., gnome-shell).
                      2) Commercial CRAP comes on all hardware sales.
                      3) Proprietary software is not built with Linux in mind.

                      (2) and (3) are obviously connected, in that (3) is a direct result of (2).
                      To some extent, (2) is a result of (1) as well.

                      Comment
