KDE Plasma 5 Desktop Will Enter FreeBSD "When It's Stable & Usable"


  • #31
    Originally posted by nasyt View Post
    Basically, my point behind this thought is: if the BSD guys stop their BSD projects, they should not cease coding, and they should not stoop to wasting their time on Linux and hand Linux trolls like beetreetime the victory. If the BSD guys could kickstart a new project that causes these Linux trolls more grief than BSD itself does, they should do it.
    I'm pretty sure BSD devs are decent enough to decide on their own how to spend their time. At the end of the day, BSD kernels aren't like Minix or Hurd, so it is quite unlikely that monolithic kernel devs would be fond of the idea of dealing with microkernels instead. That is quite a different area of expertise.

    Btw, all these fairy tales about it being easier to develop things in user mode are long outdated. These days it is normal to see devs trying hazardous things in qemu, which is why plenty of bug reports to the Linux kernel mention qemu. That advantage of microkernels does not exist anymore. Yet it is easier and faster to do things in kernel mode when it comes to dealing with hardware. To make it more fun, the Linux kernel these days provides some (optional) facilities, like the UIO framework, to help write certain drivers in user mode, mostly for those who do not want to fiddle with kernel development and do not need superb speeds and latencies. Not that this is widely used, but these parts are present in the kernel and can be enabled and used if someone needs them. It seems monolithic kernel devs aren't entirely against borrowing some ideas from the microkernel world.
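
    To make that concrete, here is a minimal sketch of a user-mode driver built on the Linux UIO framework. The device node /dev/uio0, the one-page register window, and the status register at offset 0 are all assumptions for illustration; a real driver would take them from actual hardware bound to a UIO kernel stub such as uio_pdrv_genirq:

        /* Sketch of a UIO user-mode driver. /dev/uio0 and the register
           layout are hypothetical. Reading from the fd blocks until the
           kernel-side stub signals an interrupt and yields the running
           interrupt count; mmap() exposes the device's register window. */
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/dev/uio0", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }

            /* Map register window 0; its real size would be read from
               /sys/class/uio/uio0/maps/map0/size. Assumed: one page. */
            size_t map_size = 4096;
            volatile uint32_t *regs = mmap(NULL, map_size,
                                           PROT_READ | PROT_WRITE,
                                           MAP_SHARED, fd, 0);
            if (regs == MAP_FAILED) { perror("mmap"); return 1; }

            for (;;) {
                uint32_t irq_count;
                if (read(fd, &irq_count, sizeof irq_count) != sizeof irq_count)
                    break;                    /* device gone or error */
                printf("interrupt #%u, reg[0] = 0x%08x\n",
                       irq_count, regs[0]);   /* reg 0: hypothetical status */
            }
            munmap((void *)regs, map_size);
            close(fd);
            return 0;
        }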

    And TBH I do think BSDs could possibly have a better future than e.g. Minix and suchlike.
    Quite possible. Monolithic kernels are easier to develop, and there are more devs who already know how to do it right.

    About the point of robustness:
    Let's say there is an interaction between two computers, an Amiga computer (as client) and a GCOS mainframe (as server). The Amiga TCP/IP stack contains a vast number of RFC violations. The GCOS TCP/IP stack has no known exploits, but it is a nitpicker. The result is that every time the Amiga computer sends a malformed TCP packet, the GCOS mainframe simply resets the connection. That does not constitute robustness on GCOS's part, in my opinion. (Disclaimer: this example is purely fictional.)
    I think if you need reliability, your applications should take care of it, e.g. by re-establishing connections, or maybe by using UDP, and so on. At the end of the day, in the real world you can face firewalls doing all sorts of odd things (read about e.g. the Great Firewall of China to get an idea of how nasty it can get; corporate firewalls aren't much better). There are NATs, which were never part of the original IP design. There are proxies and the like. Mere compliance with the RFCs is not enough: you have to take real-world deviations into account at every layer, from the OS kernel, which at the very least must not fail badly on bogus network packets of any kind, to the application layer, where apps have to cope with the fact that the user could be heavily firewalled or sitting behind a corporate proxy. That's why SIP and XMPP suck while proprietary Skype takes over the world: Skype is okay with working in real-world infrastructure full of oddities, while SIP just gets stuck on the nearest NAT and needs manual attention. That's how NOT to do your network protocols.
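
    Since "re-establishing connections" is the kind of thing every client ends up doing by hand, here is a minimal sketch of a connect loop with exponential backoff; the host and port passed in are placeholders, not anything from the thread:

        /* Sketch: application-level reconnection with exponential backoff.
           Host/port are caller-supplied placeholders; error handling and
           address iteration are kept to the bare minimum. */
        #include <netdb.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        static int connect_with_backoff(const char *host, const char *port)
        {
            unsigned delay = 1;               /* seconds, doubled per failure */
            for (;;) {
                struct addrinfo hints, *res;
                memset(&hints, 0, sizeof hints);
                hints.ai_socktype = SOCK_STREAM;

                int fd = -1;
                if (getaddrinfo(host, port, &hints, &res) == 0) {
                    /* Sketch only tries the first resolved address. */
                    fd = socket(res->ai_family, res->ai_socktype,
                                res->ai_protocol);
                    if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
                        close(fd);
                        fd = -1;
                    }
                    freeaddrinfo(res);
                }
                if (fd >= 0)
                    return fd;                /* connected; caller takes over */

                fprintf(stderr, "connect failed, retrying in %u s\n", delay);
                sleep(delay);
                if (delay < 64)
                    delay *= 2;               /* cap the backoff at 64 s */
            }
        }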

    Well, IMHO robustness in protocol implementations is about being free of bugs and exploits, but also about being tolerant of somewhat malfunctioning or non-RFC-compliant peers.
    It depends on the behavior. First, you can't have a priori knowledge of all the quirks remote systems might expose. Furthermore, someone can launch a packet generator doing truly arbitrary things, so you can face all known and unknown quirks interleaved in arbitrary ways; you can even receive someone's /dev/urandom dump instead of a proper protocol. Nothing wrong with that: it is possible to do, hence it has to be expected. Then, some remote systems will try to abuse this in their favor, e.g. to gain priority over others or even to mount denial-of-service attacks. Ever heard of the "Treason uncloaked!" message? These days the Linux kernel is a bit calmer about reporting TCP/IP wrongdoers trying to DoS the TCP/IP stack. Do you know what those wrenches try to do? They shrink the TCP window to size 0. The protocol allows a peer to request that, but if the remote blindly does as asked, the transfer stalls for a stupid reason and the TCP connection then sits around for a while, consuming resources. By repeating the trick en masse, one can try to exhaust system resources so the attacked machine can't set up new TCP connections. A SYN flood is simpler and more brute-force, yet it can still be hard to fend off, etc. Networks were not designed with aggressive behavior in mind, and that plays a poor joke on us these days.

    Still, I do not remember major network problems in Linux or the BSDs, so this can hardly be considered a major issue. These days most systems are hijacked "thanks" to low-quality web software, overgrown browsers and so on. Somehow, Minix would not save you from web software exploits or browser bugs. On the other hand, containers and VMs can confine such software, making intrusions easier to detect and ensuring an attack is much less rewarding than it was meant to be.
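
    As an aside, one concrete countermeasure to the zero-window stall described above is Linux's TCP_USER_TIMEOUT socket option (available since kernel 2.6.37): it caps how long transmitted data may stay unacknowledged, so a peer pinning the window at 0 gets reset instead of holding resources forever. A sketch, with an arbitrary illustrative timeout:

        /* Sketch: bound how long a peer can stall a connection (e.g. by
           advertising a zero-size TCP window). Linux-specific option. */
        #include <netinet/in.h>
        #include <netinet/tcp.h>    /* TCP_USER_TIMEOUT, Linux >= 2.6.37 */
        #include <stdio.h>
        #include <sys/socket.h>

        int harden_socket(int fd)
        {
            /* Milliseconds sent data may remain unacknowledged before the
               kernel aborts the connection; 30 s is an arbitrary choice
               for illustration, not a recommendation from the thread. */
            unsigned int timeout_ms = 30 * 1000;
            if (setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                           &timeout_ms, sizeof timeout_ms) != 0) {
                perror("setsockopt(TCP_USER_TIMEOUT)");
                return -1;
            }
            return 0;
        }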

    That's already simple: reliability/dependability. Approximately 15 kernel bugs vs. 15,000 kernel bugs. :-)
    And a HelloWorld program could contain 0 bugs, but somehow it brings no fun. You see, who needs an empty system capable of nothing? As an OS and its drivers gain more features, more bugs appear, and some of those bugs can be (ab)used to do undesirable things in unexpected ways. Software quality is more or less the same in terms of bugs per kloc. Sure, the Linux kernel may be 2 or even 10 times better than e.g. web software, but once you have 100,000 lines of code you HAVE to expect bugs: at even one defect per kloc, that's a hundred of them. And you'll have a hard time rendering e.g. Phoronix with a small amount of code. Servers are usually pwned through bugs in web software, and needless to say, those bugs stay regardless of whether you run Minix or whatever. Sorry, but a buggy WordPress plugin stays buggy regardless of the underlying OS :P.



    • #32
      Originally posted by SystemCrasher View Post
      Basically, my point behind this thought is, if the BSD guys stop their BSD projects,...
      I'm pretty sure BSD devs are decent enough to decide on their own how to spend their time. At the end of the day, BSD kernels aren't like Minix or Hurd, so it is quite unlikely that monolithic kernel devs would be fond of the idea of dealing with microkernels instead. That is quite a different area of expertise.
      My point is not about how good microkernels are. It was purely a reaction to beetreetime saying:

      Seriously, BSD assholes should just give up, go to Linux and apologize to Torvalds and Stallman or just die. Seriously
      And I say: "NO! BSD devs should rather go to Microsoft and make Windows Server better than Linux RATHER THAN apologizing for anything to Torvalds and Stallman!!!"

      And I say this even though I really dislike Microsoft. I hope THIS explains my view.

