Torvalds: User-Space File-Systems, Toys, Misguided People


  • Djhg2000
    replied
    Originally posted by aht0 View Post
    Somehow that thread popped up as "recent".
    The thread was linked in a recent news post. I didn't realize it was an old thread myself until I got to the last page.



  • cb88
    replied
    Originally posted by elanthis View Post
    the big two competing OSes are at least partially micro-kernels (OS X and WinNT) and perform just fine to get stuff done for the regular folk thank you very much. Also, Win7 crashes less than Linux. (Seriously. If your DRM driver in Linux crashes, your system is hosed. If your WDDM driver in Windows crashes, it just restarts, and even quite a few apps that use D3D directly can recover from that restart without a hitch. It's pretty awesome. I get waaaay more kernel oopses from Linux than I get blue screens from Windows... and I very very rarely get a Linux kernel oops.)
    This just isn't true. Mac OS and NT are both well known to be hybrid kernels, which means a monolithic kernel with loadable modules and potentially userspace modules... as far as WDDM goes, Linux has nearly the same thing with libDRM and Mesa. And yes, graphics drivers can restart on Linux; WDDM just enforces that drivers implement all that nice stuff. Windows blue screens are mostly hardware-specific... if you get a poorly supported device it will bluescreen for days. A coworker of mine had a laptop whose sound driver would bluescreen him at least 1-2 times a day until he figured out the right update to install to fix it (a myriad of other updates didn't work).

    Both Mac OS and NT lean much more heavily toward being modular kernels with loadable modules than toward microkernels, to the point that most of the microkernel advantages are negated. The main advantage both gain from userspace drivers is that they are easier to write and maintain.



  • aht0
    replied
    Originally posted by schmidtbag View Post
    First of all, you're quoting a 6-year-old post.
    Second, I never said or implied I OC in Linux and not Windows. I was just stating that overclocking is one of the primary causes of instability in Linux, as it is on any OS. Linux isn't magically stable no matter what you do, so I wanted to clarify that I have in fact encountered stability issues with it, because of things such as OCing.
    Third, I never said or implied software quality correlates to hardware pushed past its limits.
    Originally posted by schmidtbag View Post
    Win 7 is definitely the most stable GUI OS made by MS, but I've found it has crashed on me several times before, and I don't use it for anything except gaming and virtualization (and no, it hasn't crashed during a game or while virtualizing). Linux only crashes on me when a program I develop goes wrong, when I overclock too much, or when drivers are faulty.
    Just for the sake of argument.
    You are pretty much implying there that Windows would crash with or without overclock, with or without doing anything, be the drivers faulty or not... etc.

    And yeah, 6 years old. The last post was from 2 years ago. Somehow that thread popped up as "recent". I don't dig around. Sorry. I won't respond here any more.
    Last edited by aht0; 26 October 2017, 06:33 PM.



  • schmidtbag
    replied
    Originally posted by aht0 View Post
    I find it hard to believe that you are running your Windows without an overclock and only overclock for Linux. The overclock right there is reason enough why your Windows would crash. Software quality does not help a thing when your hardware has been pushed past its design limits.
    First of all, you're quoting a 6-year-old post.
    Second, I never said or implied I OC in Linux and not Windows. I was just stating that overclocking is one of the primary causes of instability in Linux, as it is on any OS. Linux isn't magically stable no matter what you do, so I wanted to clarify that I have in fact encountered stability issues with it, because of things such as OCing.
    Third, I never said or implied software quality correlates to hardware pushed past its limits.



  • aht0
    replied
    Originally posted by schmidtbag View Post
    Win 7 is definitely the most stable GUI OS made by MS, but I've found it has crashed on me several times before, and I don't use it for anything except gaming and virtualization (and no, it hasn't crashed during a game or while virtualizing). Linux only crashes on me when a program I develop goes wrong, when I overclock too much, or when drivers are faulty.
    I find it hard to believe that you are running your Windows without an overclock and only overclock for Linux. The overclock right there is reason enough why your Windows would crash. Software quality does not help a thing when your hardware has been pushed past its design limits.



  • Aleksei
    replied
    Originally posted by allquixotic View Post
    Also, no such in-kernel module exists for Amazon S3, or SSH, or FTP. And in the case of these very network-limited filesystems, the performance drop of the userspace indirection is probably quite insignificant, especially if you're going out over the public internet, which is thousands of times slower than the maximum bandwidth of a FUSE filesystem. You should even be able to max out a gigabit ethernet port over a LAN using a FUSE filesystem for SSH or FTP.

    Really, the people complaining aren't offering many alternatives for us to use to get higher performance. And if they are offering them, they have showstopping licensing issues in both cases I'm aware of. Maybe the simple fact that FUSE filesystems have user adoption and are successful is a little hint to the kernel community that, maybe, writing kernel code with all the special rules and regulations of Linux is more trouble than it's worth.
    So true. I use ntfs-3g (interop with dual-boot windoze) and CurlFtpFS (worked better than NFS and CIFS for me) on a daily basis.
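
    To make the userspace-filesystem plumbing concrete, here is a minimal read-only FUSE filesystem sketch in C against the classic FUSE 2.x API. It is an illustrative assumption, not code from ntfs-3g or CurlFtpFS; the file name hello_fs.c, the /hello path, and the build command are hypothetical.

    /* hello_fs.c - illustrative read-only FUSE filesystem (FUSE 2.x API).
     * Build (assumption): gcc hello_fs.c $(pkg-config fuse --cflags --libs) -o hello_fs
     * Run:                ./hello_fs /some/empty/mountpoint
     */
    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <string.h>
    #include <sys/stat.h>

    static const char *hello_path = "/hello";
    static const char *hello_body = "hello from user space\n";

    /* Report "/" as a directory and "/hello" as a small read-only file. */
    static int hello_getattr(const char *path, struct stat *st)
    {
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
        } else if (strcmp(path, hello_path) == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = (off_t)strlen(hello_body);
        } else {
            return -ENOENT;
        }
        return 0;
    }

    /* List the single file in the root directory. */
    static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                             off_t offset, struct fuse_file_info *fi)
    {
        (void)offset; (void)fi;
        if (strcmp(path, "/") != 0)
            return -ENOENT;
        filler(buf, ".", NULL, 0);
        filler(buf, "..", NULL, 0);
        filler(buf, hello_path + 1, NULL, 0);
        return 0;
    }

    /* Only allow read-only opens of /hello. */
    static int hello_open(const char *path, struct fuse_file_info *fi)
    {
        if (strcmp(path, hello_path) != 0)
            return -ENOENT;
        if ((fi->flags & O_ACCMODE) != O_RDONLY)
            return -EACCES;
        return 0;
    }

    /* Copy the requested slice of the file contents back to the kernel. */
    static int hello_read(const char *path, char *buf, size_t size, off_t offset,
                          struct fuse_file_info *fi)
    {
        (void)fi;
        size_t len = strlen(hello_body);
        if (strcmp(path, hello_path) != 0)
            return -ENOENT;
        if ((size_t)offset >= len)
            return 0;
        if (size > len - (size_t)offset)
            size = len - (size_t)offset;
        memcpy(buf, hello_body + offset, size);
        return (int)size;
    }

    static struct fuse_operations hello_ops = {
        .getattr = hello_getattr,
        .readdir = hello_readdir,
        .open    = hello_open,
        .read    = hello_read,
    };

    int main(int argc, char *argv[])
    {
        /* Every VFS call on the mountpoint is forwarded by the in-kernel FUSE
         * module to this process; that round trip is the overhead being argued
         * about in this thread. */
        return fuse_main(argc, argv, &hello_ops, NULL);
    }

    Mounted on an empty directory, a cat of mountpoint/hello goes read() -> kernel VFS -> FUSE module -> this process and back, which is exactly the extra user/kernel crossing (and the flexibility) being weighed in the posts above.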



  • nanonyme
    replied
    Originally posted by dreh23 View Post
    Just to clarify: ZFS on Linux is not FUSE-based, despite what some people say. There was a ZFS FUSE implementation, but ZOL is far more advanced and a lot of people are using it successfully in production. To read about the license incompatibility issue, see: http://zfsonlinux.org/faq.html#WhatA...LicensingIssue . Btw, a lot of people believe there is even a legal way to include the ZFS code in the mainline kernel, but probably a court would have to decide this (hello Oracle). Nevertheless, ZFS is super stable (we have used it in production for more than two years).
    Funnily enough, ZOL is not installable on Debian if you want virt-sparsify on your machine, since it depends on fuse-zfs.



  • dreh23
    replied
    Just to clarify: ZFS on Linux is not FUSE-based, despite what some people say. There was a ZFS FUSE implementation, but ZOL is far more advanced and a lot of people are using it successfully in production. To read about the license incompatibility issue, see: http://zfsonlinux.org/faq.html#WhatA...LicensingIssue . Btw, a lot of people believe there is even a legal way to include the ZFS code in the mainline kernel, but probably a court would have to decide this (hello Oracle). Nevertheless, ZFS is super stable (we have used it in production for more than two years).



  • movieman
    replied
    Originally posted by XorEaxEax View Post
    What are you talking about? Of course they are slower; communicating through message passing will always be slower than communicating through shared memory.
    Indeed. I was reading about the latest version of Minix yesterday after reading this thread, and one thing an article said was that writing to an I/O port through the microkernel only took 500 nanoseconds. That doesn't sound so bad until you realise it works out to typically 1,000-1,500 clock cycles on a modern CPU.

    Fortunately it's not something that drivers do often (I'm guessing 90+% of I/O these days is memory mapped) and I/O writes are slow anyway, but it's still a pretty significant amount of time for what would otherwise be a simple instruction in the kernel.

    That said, kernel performance probably doesn't matter much in normal desktop use; it's much more important in specialised uses like high-performance web servers where you really don't want to be taking the hit of continually going in and out of user space to send network packets.
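
    To put that 500 ns figure into rough perspective, here is a hedged measurement sketch in C. It assumes x86 Linux with glibc's <sys/io.h>, root privileges, and the traditionally unused diagnostic port 0x80; the loop count, port choice, and file name are arbitrary illustrations, not anything taken from the Minix article.

    /* io_port_timing.c - rough sketch: time one direct port write with the TSC.
     * Build (assumption): gcc -O2 io_port_timing.c -o io_port_timing
     * Run:                sudo ./io_port_timing
     */
    #include <stdio.h>
    #include <sys/io.h>     /* ioperm(), outb() - glibc, x86/x86-64 only */
    #include <x86intrin.h>  /* __rdtsc() */

    int main(void)
    {
        /* Ask the kernel for direct access to port 0x80 (the classic POST/delay port). */
        if (ioperm(0x80, 1, 1) != 0) {
            perror("ioperm (needs root)");
            return 1;
        }

        unsigned long long best = ~0ULL;
        for (int i = 0; i < 1000; i++) {
            unsigned long long t0 = __rdtsc();
            outb(0, 0x80);              /* one in-process port write, no kernel round trip */
            unsigned long long t1 = __rdtsc();
            if (t1 - t0 < best)
                best = t1 - t0;
        }
        printf("fastest direct outb: ~%llu TSC cycles\n", best);

        /* For comparison: 500 ns per port write through a microkernel is roughly
         * 1,000-1,500 cycles at a 2-3 GHz clock (500e-9 s * 2.5e9 Hz = 1,250). */
        return 0;
    }

    The absolute numbers are machine-dependent; the point is only to show how the 500 ns quoted for the microkernel path compares against a bare in-process port write.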



  • XorEaxEax
    replied
    Originally posted by Ze.. View Post
    Micro-kernels aren't slower; yes, some were slower, but that was due to poor design decisions to do with process handling and inter-process communication.
    What are you talking about? Of course they are slower; communicating through message passing will always be slower than communicating through shared memory. It's true that microkernels have worked hard on improving the speed of message passing, like grouping chunks of messages instead of passing them one by one, for example. BUT IT IS STILL SLOWER. That's the price you pay for the safety of truly separated processes, where if one crashes it won't bring down the system or even any other process. Sometimes it's worth that price, but again, IT'S SLOWER. Microkernels have some undeniable benefits, but speed certainly isn't one of them.
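
    As a back-of-the-envelope illustration of that price, here is a hedged microbenchmark sketch in C. It assumes Linux/glibc; the 64-byte message size, the iteration count, and the use of a plain static buffer as a stand-in for a real MAP_SHARED region are simplifying assumptions, and the file name is hypothetical.

    /* ipc_cost.c - crude sketch: pipe round trip (message passing through the
     * kernel) vs. plain stores into a shared buffer (shared-memory style).
     * Build (assumption): gcc -O2 ipc_cost.c -o ipc_cost
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
    }

    int main(void)
    {
        enum { N = 100000, MSG = 64 };
        char buf[MSG] = {0};
        int fds[2];
        if (pipe(fds) != 0) {
            perror("pipe");
            return 1;
        }

        /* Message passing: every message crosses the kernel twice (write + read). */
        uint64_t t0 = now_ns();
        for (int i = 0; i < N; i++) {
            if (write(fds[1], buf, MSG) != MSG || read(fds[0], buf, MSG) != MSG) {
                perror("pipe I/O");
                return 1;
            }
        }
        uint64_t t1 = now_ns();

        /* Shared memory: the "send" is just a store the other side can already see
         * (a real setup would mmap a MAP_SHARED region; volatile keeps the stores). */
        static char shared[MSG];
        volatile char *dst = shared;
        uint64_t t2 = now_ns();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < MSG; j++)
                dst[j] = buf[j];
        uint64_t t3 = now_ns();

        printf("pipe round trip:    ~%llu ns/message\n",
               (unsigned long long)((t1 - t0) / N));
        printf("shared-memory copy: ~%llu ns/message\n",
               (unsigned long long)((t3 - t2) / N));
        return 0;
    }

    The absolute numbers aren't meaningful, but the gap illustrates why batching messages (as mentioned above) helps and yet cannot make the kernel crossings free.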

