GNU Hurd 0.5, GNU Mach 1.4 Released


  • #31
    Originally posted by mrugiero View Post
    I'm not sure how it changes anything on the DRM area, so can you explain it to me? As I see it, any GPL software helps you with preventing DRM, since you are free to modify it for any use.
    wikipedia: "It also adds a provision that 'strips' DRM of its legal value, so people can break the DRM on GPL software without breaking laws like the DMCA."

    It allows you to break DRM if the DRM system falls under GPLv3 rules.


    We have such laws here in Germany too...



    • #32
      Originally posted by mrugiero View Post
      I'm not sure how it changes anything on the DRM area, so can you explain it to me? As I see it, any GPL software helps you with preventing DRM, since you are free to modify it for any use.
      GPLv3 makes sure that if someone ships GPLv3 licenced software which requires a specific key to run (DRM) then they have to supply that key or allow the user to generate their own valid keys. This is to circumvent the Tivoisation-hole where Tivo users could get the source code but couldn't modify and run their own versions on the Tivo hardware due to not having access to the signing key.

      Originally posted by mrugiero View Post
      Still, it doesn't protect you from infringing others' patents.
      No licence can ever do that.



      • #33
        Originally posted by XorEaxEax View Post
        GPLv3 makes sure that if someone ships GPLv3 licenced software which requires a specific key to run (DRM) then they have to supply that key or allow the user to generate their own valid keys. This is to circumvent the Tivoisation-hole where Tivo users could get the source code but couldn't modify and run their own versions on the Tivo hardware due to not having access to the signing key.
        Thanks, that's what I didn't get before.

        No licence can ever do that.
        I'm fully aware. That's my point when comparing GPLv3 and GPLv2 on the patent issue: GPLv3 doesn't really prevent patent problems very much, since the ones it protects you from (contributors putting their patented knowledge into the code base and then wanting to charge fees) are easy to solve under GPLv2: you just revert the patches from such a contributor. I do see the advantage of not having to revert them. Still, if the approach to a microkernel were to fork the Linux kernel, this property could be added as a CLA for new contributions.



        • #34
          Originally posted by mrugiero View Post
          I'm fully aware. That's my point when comparing GPLv3 and GPLv2 on the patent issue: GPLv3 doesn't really prevent patent problems very much, since the ones it protects you from (contributors putting their patented knowledge into the code base and then wanting to charge fees) are easy to solve under GPLv2: you just revert the patches from such a contributor. I do see the advantage of not having to revert them. Still, if the approach to a microkernel were to fork the Linux kernel, this property could be added as a CLA for new contributions.
          Actually GPLv3's patent protection goes further than that. First off, in your example, even if you revert the patches from such a contributor, they could still hold patents on code which other people have contributed to that project and sue; this is something GPLv3 protects against.

          Furthermore, since the GPL kicks in upon distribution, a patent-holding entity which distributed GPLv3-licenced code can't sue over any patents it holds in that code, as it is bound by GPLv3's patent grant as a distributor. It doesn't matter whether they contributed (modified) the code or not.

          So if you are a developer afraid of potential patent suits, I'd say GPLv3 offers better protection than GPLv2; of course, patent-aggressive entities (hello Apple) refuse to ship GPLv3-licenced code for the above reasons.



          • #35
            Originally posted by blackiwid View Post
            So you could say after 30 years linux cant do stuff that hurd can do, so its a failure
            That's a way to put it which I never dared to use myself ☺

            But it's actually not that far from reality: with thousands of developers, Linux still cannot do everything the Hurd can do - with ~6 part-time developers. There might just be architectural reasons for that?



            • #36
              Originally posted by LightBit View Post
              Hurd is actually more hybrid-kernel. It doesn't run drivers in separate processes, which is most important for stability.
              Hurd runs *some* drivers in separate processes (network drivers) and DDA provides a path towards moving all drivers out of the Mach kernel.



                • #38
                  Originally posted by jayrulez View Post
                  Richard Braun (HURD guy) is working on a replacement for MACH to create a HURD-like system. He thinks the HURD is dead also.
                  First, thanks for making me say what I've never said, and which I don't even think. The Hurd isn't dead. It's actually used, at least to maintain itself, a solid proof of self hosting completeness. Yes, I'm working on an experimental Hurd-like system, which is far from complete and which will probably take years to even start a simple shell. It's a hobby, nothing serious, although I do it seriously. The goal of this project is very different from the various past next-generation Hurd attempts: it retains the same high-level architecture. It's meant as a scalable and performant Hurd clone, with only minor design fixes. You can think of it as what Linux was to Unix in the 90s, a clone written from scratch for fun.

                  So, if I'm writing a clone, do you really think I believe the original template is *dead*!? I also work on the Hurd itself, because despite all its deficiencies, it does work. In addition, it's useful to find out more about the details and either fix them directly in the Hurd, or think about a better global solution which I intend to implement in my clone. Besides, some parts of this clone can later be applied to the existing Hurd, when the requirements easily allow it, such as what was done with the slab allocator for the kernel.

                  I must say I'm extremely disappointed in most of the comments I read on this forum. I don't know the technical level of people participating here, but most of the claims are complete bullshit (the pros as well as the cons). If you really believe in what you're writing, you'd better rethink a lot of things you consider true, or just stop computer science altogether and breed goats somewhere in the mountains, far from any Internet connection. Really. For example:

                  "I mean the advanteges I refer to to make really true that "everything is a file" even if its a ftp server or something."
                  Not true. You're confusing with Plan9/Inferno. On the Hurd, it would rather be "everything is an object", where objects can implement one or more interfaces.

                  "But this modularity is basicly the point of a micro-kernel."
                  Not true, there are monolithic systems running on microkernels, such as XNU. At the same time, Linux achieves great modularity. It's possible to do it on microkernels, of course, and normally easier because you reuse existing stuff such as Unix processes and the standard linker instead of reimplementing it in a special way.

                  "I think they used Mach and then switched to something called L4 as microkernel"
                  Not true, the Hurd still uses Mach at its core. The L4 attempt was abandoned.

                  "A microkernel is probably worse when you try to convince people to follow a specific rule, like exposing everything as a file, since they can just write servers without even looking at such rules"
                  Not true, both microkernel and monolithic systems can expose everything the way they want. Think of multimedia interfaces where files are only used as rendez-vous points, for example. They can also enforce whatever they want. The Hurd is meant to provide some abilities that normally require root privileges on Unix, but access to the rights (capabilities) to do so could be completely restricted if the system had another goal. Fine-grained right management is one of the properties of capability-based systems that people focusing on security usually look for.

                  "Optimization and microkernels aren't really hand in hand, you know? A microkernel, by its own architecture, implies a lot of extra calls and inter-process communication. It's its nature."
                  Not true. The fact that a microkernel-based system can't be faster than a monolithic one doesn't mean one shouldn't focus on performance; on the contrary. Also, the amount of extra calls depends a lot on the architecture of the userspace system itself (Minix makes a lot of IPC, while XNU doesn't). Finally, IPC latency can be very close to that of a system call, as fifteen years of research have demonstrated.

                  "the main reason for Hurd is not having a microkernel, but allowing to have a fully free OS"
                  Not true. At least not true any more, since many free alternatives now exist. The current purpose of the Hurd is technical freedom for users. It might sound extremely fancy and shallow, but there is a point, which I describe later.

                  "IIRC, Hurd started way before than Linux."
                  Not true. It was actually started right before Linux, in 1990, and suffered from a cathedral style of development, which made Linux a lot faster to progress.

                  "Runtime modularity is more related to reliability"
                  Complete nonsense. Modularity is the ability to easily add and remove features. How does that relate to reliability? What matters is how modularity is implemented. In the case of the Hurd, it's done by isolating features in separate protection domains (processes and their address spaces), with a specific set of capabilities to access the rest of the system. Changing those capabilities also allows giving different namespaces to services, something that was done very late in Linux, and is a pain to cover completely since it has to be done for each type of object that can be subject to namespaces. The real-world application is containers à la LXC/FreeBSD jails/Subhurds. The Hurd was able to provide that feature by its core design alone.

                  "It's dead. Gone. No chance. Ever!"
                  BO$$, I'm not sure how the Hurd could ever have marked your childhood so deeply, but you ought to talk about it to a specialist and just STFU here. If you people who use Linux and like it could ever learn anything from it, it would be that software evolves much like living organisms, an analogy Torvalds has made on numerous occasions. Let the Hurd be one of the branches of the evolutionary tree. Why the hell would you want to cut it? What would you gain? Think about what you may lose!

                  "Note: I prefer FreeBSD to any Linux distro."
                  Right, a system that still contains unmodified code from the Mach virtual memory system (which the Hurd also uses), still has a big kernel lock because of legacy drivers, uses a splay tree for virtual map entries (!), still has no real lockless support at all (!!) ... I could go on but I think the troll is well on its way.

                  "Why is this alternative needed and who needs it?"
                  It's needed because I really DON'T want any random code to corrupt my network hardware! (http://lwn.net/Articles/304105/) And because I don't want to take the risk of crashing everything when I try out a new kernel module.

                  "Yes, for all intents and purposes, the HURD is dead. The HURD suffers from quite a number of deficiencies and it may be easier to re-implement it from scratch than fixing it.
                  Linux is not about rewriting everything.
                  No one needs to rewrite the Linux kernel."
                  Not true. Linux has a history of massive subsystem rewrites. Just see how many times the block layer was revamped up until 2.6. Take a look at what existed before Netfilter. Rewriting things is actually mentioned as part of the "Unix culture" in ESR's The Art of Unix Programming. And by the way, Linux is a rewrite of Unix, but it's still a Unix system. In much the same way, a Hurd clone would still be a Hurd system. Definitely not dead if there is a technical point to it.

                  "A lot of people even claim that some kernels are microkernels when they are in fact monolithic kernels(Haiku' kernel, XNU. Windows NT and a few others)"
                  Partially true. A microkernel doesn't imply a multiserver system, but there still is a microkernel (or at least a hybrid).

                  "If you start ALL THE SET from scratch, you will get a virtually non-advancing project, such as Hurd is."
                  Not true. The Hurd actually reuses a lot of code from Linux. The focus of the project is the system itself, not what runs around its core such as drivers. The fact the Hurd is progressing very slowly is simply a question of manpower.

                  "The problem with GPLv3 is as simple as the fact it's almost impossible to not step over someone's license, and nobody wants to take the risk of being responsible of it. That's the only "advantage" when using GPLv3"
                  I guess we didn't read the same license. The point of GPLv3 is to make sure users can run the software they want when the license should allow it, instead of merely protecting the source code, but since this is a bit off-topic, I won't dive into that. For the Hurd, we currently don't mind either version 2 or 3 of the license. How did the discussion shift to licenses anyway?!

                  "Also, writing from scratch a kernel AND a lot of servers is a lot more work and a far less efficient than just rewriting the most basic part of a kernel to a micro kernel and refactoring a bit of the Linux kernel to allow it to run as a server"
                  Not true. Well, actually, what you're describing is almost impossible, and if it's ever done, it would only result in an ugly system. One advantage/problem with monolithic kernels is that they provide relaxed interfaces that any part of the kernel can bypass for efficiency. Monolithic kernel proponents describe that as an integration advantage, while microkernel advocates consider it a big violation of the "single responsibility principle" that is normally applied to object-oriented software (and when I say object-oriented, I don't mean C++ or Java! I mean objects invoked through methods). Both GNU Mach and the Hurd are object-oriented pieces of software, by design. So merely using Linux code within different servers and transparently replacing some function calls with RPCs would completely break this principle of responsibility over implemented objects. We do reuse Linux code for bulk subsystems such as drivers and the networking stack, but that's about it.

                  "Hurd is actually more hybrid-kernel. It doesn't run drivers in separate processes, which is most important for stability.
                  If you really want micro-kernel, QNX is your best choice (sadly it is proprietary and is dying)."
                  Partially true. GNU Mach (not the Hurd) is a hybrid because it implements capabilities, high level virtual memory (memory allocation, copy on write, anonymous memory), resource containers (tasks), complex thread scheduling, and most device drivers. That is still too much to consider it a true microkernel like L4. On the other hand, a hybrid has its advantages since it allows reducing the amount of communication for things we may not want to replace by a userspace implementation (just look at how many operations and servers are involved for a mere malloc on an L4 based system). I agree QNX is a good implementation to consider, although it's still not a true microkernel by modern standards, since it also implements capabilities (channels and connections) in the kernel, unless I'm mistaken.


                  For those interested in why some people are still working on the Hurd:
                  The Hurd is meant to provide extensibility at a low level, by giving users the ability to create custom (file or non-file) interfaces and servers, without changing the kernel or the existing system servers. Although we can provide a file interface to many objects, the same way e.g. procfs does for processes, this isn't a goal, just a nice feature. The goal is user freedom, and by that, I mean allowing users to precisely select the system services they want to use, giving them the ability to use their own if they want to, and have the right to. Having the right here doesn't imply root privileges. For example, if any user can open a TCP port, any user can create a VPN over it, without interfering with global system networking. If any user can connect to an FTP server, any user can represent the content of that server as a file hierarchy without asking root. If any user has the right to create tasks and allocate memory, any user should be able to run her own Hurd instance without annoying anyone else beyond the consumption of those resources. THAT is the true goal of the Hurd: maximum user freedom and system extensibility. It's understandable that many people consider this useless since users are normally root on their own machines, and things like FUSE are available now, but it's actually a huge feature NOT to become root, for security and reliability, even today.

                  Because it's a capability-based system, you can now think "gaining privileges" instead of "becoming root and dropping privileges". You can think "if a user doesn't have the right - the capability - to do something, he really can't". This doesn't depend on the many checks a monolithic kernel would perform on each system call to make sure the caller is allowed to perform something. If you don't have the capability (defined as a reference-right), you _cannot_ access the target object. A real-world example would be an FTP server, where processes gain privileges (identities) after logging in with a username and password, without ever becoming root. If you never become root, you can't compromise the rest of the system (notwithstanding other world-accessible vulnerabilities, of course...). That's for security and reliability.
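                  To make that concrete, here is a small illustrative sketch in Python. This is hypothetical code, not Hurd sources or Mach APIs: the point is only that the access check lives in the reference you hold, not in any global "am I root?" test.

```python
# Illustrative sketch of capability-based access (hypothetical code,
# not the Hurd's actual mechanism): a capability is an unforgeable
# reference bundled with rights, and holding one is the only way to
# reach the target object.

class Capability:
    def __init__(self, obj, rights):
        self._obj = obj                      # the protected object
        self._rights = frozenset(rights)     # operations this reference allows

    def invoke(self, method, *args):
        # The check happens at the reference itself; there is no
        # ambient identity check ("am I root?") anywhere.
        if method not in self._rights:
            raise PermissionError(f"capability lacks right: {method}")
        return getattr(self._obj, method)(*args)

    def restrict(self, rights):
        # Hand out a weaker capability (attenuation), e.g. read-only.
        return Capability(self._obj, self._rights & set(rights))


class File:
    def __init__(self, data):
        self.data = data

    def read(self):
        return self.data

    def write(self, new):
        self.data = new


# Like the FTP-server example above: after "logging in", a process is
# handed capabilities matching its identity; it never becomes root.
full = Capability(File("secret"), {"read", "write"})
read_only = full.restrict({"read"})

assert read_only.invoke("read") == "secret"
try:
    read_only.invoke("write", "oops")
except PermissionError:
    pass  # without the right, the object simply cannot be modified
```

                  Nothing here maps onto real Mach ports or Hurd RPCs; it only captures the "reference-right" notion described above.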

                  And the major difference between the Hurd and other capability-based systems is that it uses the virtual file system as a service directory, where normal Unix permissions determine user access. In other words, if a user owns a node, she can have whatever server she wants implement it, without asking root. That's for extensibility and technical user freedom.
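                  As a rough sketch of the "file system as a service directory" idea, with invented names (this is not the actual Hurd code): ordinary node ownership, not root, decides who may attach a server:

```python
# Sketch: the file system doubles as a service directory, and plain
# ownership of a node decides who may attach a server (translator) to it.

class Node:
    def __init__(self, owner_uid):
        self.owner_uid = owner_uid
        self.translator = None   # server attached to this node, if any

class ServiceDirectory:
    def __init__(self):
        self.nodes = {}
    def mknode(self, path, owner_uid):
        self.nodes[path] = Node(owner_uid)
    def settrans(self, path, server, caller_uid):
        # Unix-style check: only the node's owner may attach a server.
        node = self.nodes[path]
        if caller_uid != node.owner_uid:
            raise PermissionError("only the node's owner may attach a translator")
        node.translator = server
    def lookup(self, path):
        # Resolving the path hands back whatever server is attached.
        return self.nodes[path].translator

# An unprivileged "ftpfs"-style server exposing a remote tree as files:
fake_ftpfs = {"README": b"hello from ftp"}

svc = ServiceDirectory()
svc.mknode("/home/alice/ftp", owner_uid=1000)
svc.settrans("/home/alice/ftp", fake_ftpfs, caller_uid=1000)  # alice: allowed
print(svc.lookup("/home/alice/ftp")["README"])                # b'hello from ftp'
```

                  On an actual Hurd system this is roughly what the `settrans` utility does: the owner of a node can attach a translator such as `/hurd/ftpfs` to it without any special privileges.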

                  Now stop spreading crap, and go read more papers and code if you're really interested in that topic. I won't waste any more time than I already have here.

                  Comment


                  • #39
                    Originally posted by rbraun View Post
                    "I think they used Mach and then switched to something called L4 as microkernel"
                    Not true, the Hurd still uses Mach at its core. The L4 attempt was abandoned.
                    Errr, that quote is mine, and I was talking about Darwin, which IIRC ran FreeBSD as a server on top of Mach at first, and on L4 later. But as I said a few words later, I didn't review it again to confirm my claim.

                    "A microkernel is probably worse when you try to convince people to follow a specific rule, like exposing everything as a file, since they can just write servers without even looking at such rules"
                    Not true, both microkernel and monolithic systems can expose everything the way they want. Think of multimedia interfaces where files are only used as rendez-vous points for example. They can also enforce whatever they want. The Hurd is meant to provide some abilities that normally require root privileges on Unix, but accessing the rights (capabilities) to do so could completely be restricted if the system had another goal. Fined-grained right management is one of the properties of capability-based systems that people focusing on security usually look for.
                    Can it force any given subsystem (not implemented INSIDE the kernel, I mean) to expose things as files?

                    "Optimization and microkernels aren't really hand in hand, you know? A microkernel, by its own architecture, implies a lot of extra calls and inter-process communication. It's its nature."
                    Not true. The fact that a microkernel-based system can't be faster than a monolithic one doesn't mean one shouldn't focus on performance, on the contrary. Also, the amount of extra calls depends a lot on the architecture of the userspace system itself (Minix makes a lot of IPC, while XNU doesn't). Finally, IPC latency can be very close to that of a system call, as fifteen years of research have demonstrated.
                    My statement wasn't that one shouldn't focus on performance, but rather that I don't think that's the right design if performance is the top priority.


                    "IIRC, Hurd started way before than Linux."
                    Not true. It was actually started right before Linux, in 1990, and suffered from a cathedral style of development, which allowed Linux to progress much faster.
                    I stand corrected.

                    "Runtime modularity is more related to reliability"
                    Complete nonsense. Modularity is the ability to easily add and remove features. How does that relate to reliability?
                    It relates to reliability simply because you can take down and restart faulty features without going completely down. At least, that's what I think.

                    What matters is how modularity is implemented. In the case of the Hurd, it's done by isolating features in separate protection domains (processes and their address space), with a specific set of capabilities to access the rest of the system. Changing those capabilities also allows giving different namespaces to services, something that was done very late in Linux, and is a pain to cover completely since it has to be done for each type of object that can be subject to namespaces. The real-world application is containers a la LXC/FreeBSD jails/subhurds. The Hurd was able to provide that feature by its core design alone.
                    Thanks for the information.

                    "It's dead. Gone. No chance. Ever!"
                    BO$$, I'm not sure how the Hurd could ever mark your childhood so deeply, but you ought to talk about it to a specialist and just STFU here. If you people who use Linux and like it could ever learn anything from it, it would be that software evolves much like living organisms, an analogy Torvalds has made on numerous occasions. Let the Hurd be one of the branches of the evolutionary tree. Why the hell would you want to cut it? What would you gain? Think about what you may lose!
                    Don't try to argue with BO$$, he's just a troll. Take it as friendly advice.

                    "Note: I prefer FreeBSD to any Linux distro."
                    Right, a system that still contains unmodified code from the Mach virtual memory system (which the Hurd also uses), still has a big kernel lock because of legacy drivers, uses a splay tree for virtual map entries (!), still has no real lockless support at all (!!) ... I could go on but I think the troll is well on its way.
                    He stated what he likes, not the reasons, so I don't see how showing its flaws can refute the statement.

                    "If you start ALL THE SET from scratch, you will get a virtually non-advancing project, such as Hurd is."
                    Not true. The Hurd actually reuses a lot of code from Linux. The focus of the project is the system itself, not what runs around its core such as drivers. The fact the Hurd is progressing very slowly is simply a question of manpower.
                    Still, I think it would progress faster if it used the approach I suggested.

                    "The problem with GPLv3 is as simple as the fact it's almost impossible to not step over someone's license, and nobody wants to take the risk of being responsible of it. That's the only "advantage" when using GPLv3"
                    I guess we didn't read the same license. The point of GPLv3 is to make sure users can run the software they want when the license should allow it, instead of merely protecting the source code, but since this is a bit off-topic, I won't dive into that. For the Hurd, we currently don't mind either version 2 or 3 of the license. How did the discussion shift to licenses anyway?!
                    I already said that statement was wrong, but thanks for pointing it out again.

                    "Also, writing from scratch a kernel AND a lot of servers is a lot more work and a far less efficient than just rewriting the most basic part of a kernel to a micro kernel and refactoring a bit of the Linux kernel to allow it to run as a server"
                    Not true. Well actually, what you're describing is almost impossible, and if it's ever done, it would only result in an ugly system. One advantage/problem with monolithic kernels is that they provide relaxed interfaces that any part of the kernel can bypass for efficiency. Monolithic kernel proponents describe that as an integration advantage, while microkernel advocates consider it a big violation of the "single responsibility principle" that is normally applied to object-oriented software (and when I say object-oriented, I don't mean C++ or Java! I mean objects invoked through methods). Both GNU Mach and the Hurd are object-oriented pieces of software, by design. So merely using Linux code within different servers and transparently replacing some function calls with RPCs would completely break this principle of responsibility over implemented objects. We do reuse Linux code for bulk subsystems such as drivers and the networking stack, but that's about it.
                    It is an ugly way to provide the features. The point was to use that as a stopgap, not to do that and call it a day.
                    Anyway, I stand corrected, since you already reuse code where it's useful.


                    "the main reason for Hurd is not having a microkernel, but allowing to have a fully free OS"
                    Not true. At least not true any more, since many free alternatives now exist. The current purpose of the Hurd is technical freedom for users. It might sound extremely fancy and shallow, but there is a point, which I describe later.

                    For those interested in why some people are still working on the Hurd:
                    The Hurd is meant to provide extensibility at a low level, by giving the ability to create custom (file or non-file) interfaces and servers, without changing the kernel or the existing system servers. Although we can provide a file interface to many objects, the same way e.g. procfs does for processes, this isn't a goal, just a nice feature. The goal is user freedom, and by that, I mean allowing users to precisely select the system services they want to use, giving them the ability to use their own if they want to, and have the right to. Having the right here doesn't imply root privileges. For example, if any user can open a TCP port, any user can create a VPN over it, without interfering with global system networking. If any user can connect to an FTP server, any user can represent the content of that server as a file hierarchy without asking root. If any user has the right to create tasks and allocate memory, any user should be able to run her own Hurd instance without annoying anyone else beyond the consumption of those resources. THAT is the true goal of the Hurd, maximum user freedom and system extensibility. It's understandable that many people consider this useless since users are normally root on their own machine, and things like FUSE are available now, but it's actually a huge feature NOT to become root for security and reliability, even today.

                    Because it's a capability-based system, you can now think "gaining privileges" instead of "becoming root and dropping privileges". You can think "if a user doesn't have the right - capability - to do something, he really can't". This doesn't depend on the many checks a monolithic kernel would perform on each system call to make sure the caller is allowed to perform something. If you don't have the capability (defined as a reference-right), you _cannot_ access the target object. A real-world example would be an FTP server, where processes gain privileges (identities) after logging in with a username and password, without ever becoming root. If you never become root, you can't compromise the rest of the system (notwithstanding other world-accessible vulnerabilities, of course...). That's for security and reliability.

                    And the major difference between the Hurd and other capability-based systems is that it uses the virtual file system as a service directory, where normal Unix permissions determine user access. In other words, if a user owns a node, she can have whatever server she wants implement it, without asking root. That's for extensibility and technical user freedom.

                    Now stop spreading crap, and go read more papers and code if you're really interested in that topic. I won't waste any more time than I already have here.
                    Thanks for the information, again.
                    Last edited by mrugiero; 01 October 2013, 11:55 PM.

                    Comment


                    • #40
                      First, thanks for making me say what I've never said, and which I don't even think.
                      I never said you said that. I assumed this to be your opinion after reading http://www.gnu.org/software/hurd/mic...iciencies.html

                      One could understand why I would make that assumption after reading same.

                      Nevertheless, I apologize for attributing that opinion to you without confirmation.

                      "Note: I prefer FreeBSD to any Linux distro."
                      Right, a system that still contains unmodified code from the Mach virtual memory system (which the Hurd also uses), still has a big kernel lock because of legacy drivers, uses a splay tree for virtual map entries (!), still has no real lockless support at all (!!) ... I could go on but I think the troll is well on its way.
                      So I am a troll because I have a preference? I expected better from you, Richard Braun.

                      "Why is this alternative needed and who needs it?"
                      Not true. It's needed because I really DON'T want any random code to corrupt my network hardware ! (http://lwn.net/Articles/304105/) And because I don't want to take the risk of crashing everything when I try out a new kernel module.
                      Note: I never said that an alternative isn't needed. I was just asking the poster why and who needs it because he never stated. You may need to work on your reading comprehension skills, sir.

                      "Yes, for all intents and purposes, the HURD is dead. The HURD suffers from quite a number of deficiencies and it may be easier to re-implement it from scratch than fixing it.
                      Linux is not about rewriting everything.
                      No one needs to rewrite the Linux kernel."
                      Not true. Linux has a history of massive subsystem rewrites. Just see how many times the block layer was revamped until 2.6. Take a look at what existed before Netfilter. Rewriting things is actually mentioned as part of the "Unix culture" in ESR's The Art of Unix Programming. And by the way, Linux is a rewrite of Unix, but it's still a Unix system. Much the same way, a Hurd clone would still be a Hurd system. Definitely not dead if there is a technical point to it.
                      Again, brush up on your reading comprehension skills.

                      "A lot of people even claim that some kernels are microkernels when they are in fact monolithic kernels(Haiku' kernel, XNU. Windows NT and a few others)"
                      Partially true. A microkernel doesn't imply a multiserver system, but there still is a microkernel (or at least a hybrid).
                      I did not state that. Also, going by the most popular definitions of a microkernel, those kernels do not fall under that category. You should know well that there are varying definitions for the term "microkernel".

                      or just stop computer science altogether and breed goats somewhere in the mountains
                      Now I'll head back to the hills to herd some goats or something. You can go back to being a dick.
                      Last edited by jayrulez; 02 October 2013, 01:26 AM.

                      Comment
