Linux hacker compares Solaris kernel code:


  • a user
    replied
    Originally posted by nasyt View Post
    BUT: most people agree that Linux is NOT the best operating system.
    Most people? Aha...
    For example: the TCP/IP implementation is embarrassingly shaky.
    A specific problem in an OS does not prove that the OS is not the best one. Besides that, do you want me to tell you about all the flaws of the other OSes?

    What exactly is the point of this post? Showing your incompetence in both argumentation and knowledge?



  • MartinN
    replied
    If anything matches the definition of kernel porn, it has to be this guy's onanistic description of Solaris. Where is Solaris today? And where is Linux? I'll stick with the duct tape.

    Originally posted by kebabbert View Post
    Here is Con Kolivas, who wrote some popular Linux schedulers.




    He reviewed the Solaris scheduler, and it seems he liked it:
    I've been asked in past interviews what I've thought about schedulers from other operating system kernels that I may have studied, and if I ...


    "...The summary of my impression was that I was... surprised. Now I don't claim to be any kind of expert on code per-se. I most certainly have ideas, but I just hack together my ideas however I can dream up that they work, and I have basically zero traditional teaching, so you should really take whatever I say about someone else's code with a grain of salt. Well, anyway, the [Solaris] code, as I saw it, was neat. Real neat. Extremely neat. In fact, I found it painful to read after a while. It was so neatly laid out that I found myself admiring it. It seems to have been built like an aircraft. It has everything that opens and shuts, has code for just about everything I've ever seen considered on a scheduler, and it's all neatly laid out in clean code and even comments. It also appears to have been coded with an awful lot of effort to ensure it's robust and measurable, with checking and tracing elements at every corner. I started to feel a little embarrassed by what we have as our own kernel. The more I looked at the code, the more it felt like it pretty much did everything the Linux kernel has been trying to do for ages. Not only that, but it's built like an aircraft, whereas ours looks like a garage job with duct tape by comparison.

    As an aside, I did google a few terms they used which I hadn't seen before, and I was more than a little disappointed to find patents regarding the names... Sigh.

    Now this would be a great time to take my comments out of context without reading on. The problem is that here was a scheduler that did exactly what I hate about what the Linux kernel scheduler is becoming. It's a monstrosity of epic proportions, and as far as an aircraft goes, it's like taking an Airbus A380 on a short joyride if you're running it on a desktop. It looks like a good, nay, great design for a massive airliner. By looking at it alone, I haven't got the foggiest what it might run like on a desktop. Now since I'm full of opinion and rhetoric and don't ever come through with any substance (maybe writing my own scheduler invalidates that?), I'm going to also say I can't even be bothered trying it, for you to confirm your suspicions about me.

    ...the Linux kernel (scheduler) suddenly looks like the Millennium Falcon. Real fast, but held together with duct tape, and ready to explode at any minute...."



    So, he feels embarrassed about the Linux code after studying Solaris? Hmmm....



  • nasyt
    replied
    Originally posted by Pawlerson View Post
    He's not Kraftman, I assure you. However, you sound exactly the same: clueless, and like a troll. However, thank God Slowlaris is nearly dead and almost nobody is using it. Even Oracle wants to kill it, so they're investing a lot in Linux.
    ...to make sure that more Internet servers run on shaky TCP/IP implementations.

    Originally posted by Pawlerson View Post
    I explained in another thread that there are no anti-BSD and anti-Solaris trolls. It's impossible! It's like trolling against shit. Do you know any troll that trolls against shit? Furthermore, you're an anti-intelligence troll, is that OK? Why do you troll against intelligence?
    You are trolling and flaming against everything that is not Linux. BUT: most people agree that Linux is NOT the best operating system.

    For example: the TCP/IP implementation is embarrassingly shaky.



  • nasyt
    replied
    Originally posted by Pawlerson View Post
    Furthermore, if such scaling is easy, then I don't understand why Linux has no competition in this market. Costs don't matter here.
    Convenience: being able to use already existing programs. It's the same reason why Windows holds 90% of the desktop market (as of now).



  • Ibidem
    replied
    Originally posted by kebabbert View Post
    There are workloads that are SMP-type, so an HPC cluster cannot run such workloads.


    Have you looked at the Linux scaling of 8-socket servers? It is really bad. Just because Linux runs on 100-core chips doesn't mean they scale well. For instance, Kraftman showed me a link about the 64-CPU HP Superdome designed for HP-UX. They had compiled Linux for it, and the benchmarks were bad. It had something like 40% CPU utilization. The HP Superdome is sold today with Linux, but the largest Linux configuration for sale is... 16 CPUs, I think. Linux is not supported on larger Superdome configurations. I can dig a bit if you want to see the link where it says that the 64-CPU Superdome is only offered with 16 CPUs when running Linux.
    That would be nice to see, since the link indicated that larger configurations were/are possible (though the "mx2" modules, which have two Itaniums per socket, were/are not supported on Linux).

    Some SMP workloads must be run on a "single fat" server. They cannot be solved by adding more nodes. The demand is big too; these servers cost millions. And if some Linux vendor could create a cheap 32-, 64-, or 128-CPU server, they would sell lots of them. And get rich.
    ...

    Sure, I wonder things too.

    It is very easy to make me shut up: prove me wrong by posting credible links. If someone posts links to, say, Oracle or researchers, I shut up. If some random guy comes in and says things without links, I am not convinced.
    Examples of those workloads would be nice. (If you choose to link to anything about them, an explanation of why they must be run that way would be the most helpful.)


    PS: In Phoronix benchmarks from this summer (http://www.phoronix.com/scan.php?pag...ris11_ubuntu12) of Solaris 11 Express vs. several Linux distros, Solaris did poorly in quite a few tests.
    In particular, note:
    -the NAS UA.A benchmark, page 2
    -FFTE and BYTE benchmarks, page 3
    -the Himeno benchmark, page 4
    -C-Ray, page 5 (note that this is "less is better", so Solaris is by far the worst for it)

    I would be curious to see RHEL/CentOS/SL vs OEL vs Solaris benchmarks on the same Oracle hardware. Any of the Linux distros vs Solaris would be informative, but OEL vs Solaris is the most interesting.



  • intellivision
    replied
    Originally posted by Pawlerson View Post
    We can agree and laugh at you as well, because of your 'intelligent' and 'mature' replies. Let's have a competition over which idiot posts more funny pictures. OpenSLOWlaris is winning at this moment, because his picture represents reality: Linux - and everyone else - can do whatever they want with the BSD code. However, I don't know if it is a reason to be ashamed, because as far as I know the BSD motto is: to serve others. Btw., is it enough to become a Phoronix member to get a green light for trolling without fear of being banned?
    That's actually a funny observation, because all of those users have been banned. For trolling.



  • mrugiero
    replied
    Originally posted by Pawlerson View Post
    The point was that HPC doesn't care about OS cost. It only cares about performance. Solaris was replaced by Linux in HPC.
    I don't claim to know about that, but you didn't mention it before my first answer in the thread.
    I'd like you to post something that confirms they only care about performance and nothing else, though.

    Originally posted by Pawlerson View Post
    What you have written is strange. If a single Linux kernel is running on a cluster, then how can you say any given node controls its hardware? I would like you to explain this. Furthermore, if such scaling is easy, then I don't understand why Linux has no competition in this market. Costs don't matter here.
    Well, for a start, this message-passing interface isn't usually at the kernel level, and Open MPI is portable across OSes. I don't know if it is the most used solution, but I believe it is. Most OSes have no clustering capabilities by themselves, but use this kind of software instead.

    About the single Linux kernel, I guess I expressed it wrong. What I meant is, you only need a full GNU/Linux system on the master. Nodes only need the kernel, a simple scheduler (they manage few tasks, since most management of the workload is done on the master) and a few basic services (the only ones that come to mind are networking and the MPI, which have to be supported on all nodes). An OS is needed on every node, but all of them except the master can be pretty basic. The hardware is controlled by each node, and thus you only need to scale per node. MPI takes care of the bigger-picture scaling (how to distribute the workload in the most efficient way). If Linux offers the best performance on computers with few cores, then you'll probably get the best performance using Open MPI (if my assumption that it's the best/most used solution is valid) with Linux. Also, admins for Linux are probably easier to find, and I THINK (but I'm not sure) it's easier to set up, etc. This step becomes important considering you must install an as-basic-as-possible distribution on every node (and AFAIK, Linux is the simplest for something like that) and a full OS on the master. I guess the main reason most clusters use exactly the same hardware for every node has a lot to do with simplifying the setup of the nodes.
    Anyway, I don't know enough about the other possible OSes to give accurate reasons why Linux has no competition in that field. I'm just interested in the subject of clustering, so I have read a little about it.

    I must disagree about the importance of costs. Usually, you use clusters because they're the cheapest option to do the work. But since nodes aren't usually touched, and licenses are paid only once (AFAIK), you could just hire an admin for whatever OS runs on the nodes (the best-performing one you find) when/if they fail, and use Linux (with a full-time admin) on the master node, so costs are indeed negligible on the software side for this use.
    Last edited by mrugiero; 12 May 2013, 03:49 AM.
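The master/nodes split described above can be sketched in miniature on a single machine: a "master" splits the workload into chunks and hands each chunk to a worker, and each worker only ever sees its own chunk. This is an illustrative stand-in for a real cluster, with Python's `multiprocessing` playing the role of the message-passing layer; the function names (`square_chunk`, `run_master`) are invented for the sketch.

```python
# Minimal master/worker sketch: the master splits work into chunks,
# worker processes compute on their own chunk only, and the master
# reassembles the partial results. Illustrative only; a real cluster
# would use MPI across machines rather than local processes.
from multiprocessing import Pool

def square_chunk(chunk):
    # A "node's" whole job: receive a workload, compute, return the result.
    return [x * x for x in chunk]

def run_master(data, n_workers=4, chunk_size=25):
    # The master splits the workload; no worker needs to know about the
    # full data set or about the other workers.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(n_workers) as pool:
        results = pool.map(square_chunk, chunks)
    # The master reassembles the partial results into the final answer.
    return [y for part in results for y in part]

if __name__ == "__main__":
    print(run_master(list(range(100)))[:5])  # [0, 1, 4, 9, 16]
```

The point of the sketch is the asymmetry: `run_master` is the only part that needs to see the whole picture, which mirrors why only the master node needs a full OS.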



  • Guest
    Guest replied
    Originally posted by mrugiero View Post
    You should notice I'm not denying its memory footprint, nor affirming it. I don't know about it, and I couldn't care less about Solaris. What I'm saying is that JUST THE FACT THAT SERVERS MIGRATE doesn't say anything about speed or memory footprint.
    Also, HPC users are not the only ones using servers, so whether or not they care about OS prices doesn't matter at all. Industry in general cares about the cost/benefit ratio. OS prices might be a negligible difference and thus be ignored completely. Staffing costs are usually important, so the fact that it's easier to find a Linux admin still matters most of the time.
    The point was that HPC doesn't care about OS cost. It only cares about performance. Solaris was replaced by Linux in HPC.

    The difference is that you use a message-passing interface between the computers to distribute the work, and then each computer keeps working as if it were a single one. Each node works with its own OS. This allows you, for example, to use a more specific OS on the nodes, or run them with a really simple scheduler, since each node will almost surely just receive a workload, do the work, and send back the results. The master is the only one which needs a complete and responsive OS. That means you can use a simpler (i.e. less bloated) OS or configuration on the nodes when working with clusters, whereas on a single many-core computer you have to deal with a complete OS trying to scale. Clusters don't actually require that much scaling, since any given node controls just its own hardware.
    What you have written is strange. If a single Linux kernel is running on a cluster, then how can you say any given node controls its hardware? I would like you to explain this. Furthermore, if such scaling is easy, then I don't understand why Linux has no competition in this market. Costs don't matter here.



  • mrugiero
    replied
    Originally posted by Pawlerson View Post
    Yeah, right. HPC people care about OS price. It's also a fact that Solaris has a higher memory footprint and introduces higher overhead in comparison to Linux. You don't have to believe me, but you can check this yourself, and you can even find out about this on Google.
    You should notice I'm not denying its memory footprint, nor affirming it. I don't know about it, and I couldn't care less about Solaris. What I'm saying is that JUST THE FACT THAT SERVERS MIGRATE doesn't say anything about speed or memory footprint.
    Also, HPC users are not the only ones using servers, so whether or not they care about OS prices doesn't matter at all. Industry in general cares about the cost/benefit ratio. OS prices might be a negligible difference and thus be ignored completely. Staffing costs are usually important, so the fact that it's easier to find a Linux admin still matters most of the time.

    Originally posted by OpenSLOWlaris
    Lots of people agree that Linux is better than BSD/Solaris. If you don't like that, that's your own fault.
    Lots of people agree that you'll become blind if you masturbate too much.

    Especially when there are two options, each with advantages and drawbacks, a lot of people will agree that one or the other is better, mostly because it fits their needs better.

    Originally posted by Pawlerson View Post
    Too bad for you, because you didn't back up your claim with strong evidence. I'm repeating my question: what's the difference between CPUs and cores from an operating-system standpoint? Furthermore, is it meaningful to run benchmarks using different components? Where is a link to the benchmark of a 64-CPU HP server with only 40% CPU utilization? And two more questions: why didn't most of the vendors follow Sun's way of making scalable systems? Isn't this because their method is a legacy one and led them to bankruptcy? Why was Sun so weak in horizontal scaling, and why has Linux beaten them in HPC?
    The difference is that you use a message-passing interface between the computers to distribute the work, and then each computer keeps working as if it were a single one. Each node works with its own OS. This allows you, for example, to use a more specific OS on the nodes, or run them with a really simple scheduler, since each node will almost surely just receive a workload, do the work, and send back the results. The master is the only one which needs a complete and responsive OS. That means you can use a simpler (i.e. less bloated) OS or configuration on the nodes when working with clusters, whereas on a single many-core computer you have to deal with a complete OS trying to scale. Clusters don't actually require that much scaling, since any given node controls just its own hardware.
    Last edited by mrugiero; 11 May 2013, 11:31 AM.
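The "receive a workload, do the work, send back the results" life cycle of a node described above can be sketched as a single process talking to a master over a pipe. This is only an illustration on one machine, not real MPI: the names (`node_loop`, `run_on_one_node`) and the use of `None` as a stop signal are conventions invented for the sketch.

```python
# Hedged sketch of one node's life cycle: a receive-compute-send loop.
# The node needs almost no scheduling of its own, which is the point
# made above about nodes getting by with a very simple scheduler.
from multiprocessing import Pipe, Process

def node_loop(conn):
    # The whole "OS" a node needs, conceptually: wait for work, compute,
    # send the result back, repeat until told to stop.
    while True:
        work = conn.recv()
        if work is None:          # stop signal from the master (invented convention)
            break
        conn.send(sum(work))      # the "workload" here is just summing numbers

def run_on_one_node(workloads):
    # The master side: hand workloads to the node one at a time and
    # collect the results.
    parent, child = Pipe()
    node = Process(target=node_loop, args=(child,))
    node.start()
    results = []
    for w in workloads:
        parent.send(w)
        results.append(parent.recv())
    parent.send(None)             # tell the node to shut down
    node.join()
    return results
```

For example, `run_on_one_node([[1, 2, 3], [10, 20]])` sends two workloads through the loop and collects their sums; in a real cluster the pipe would be a network link and there would be one such loop per node.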



  • Guest
    Guest replied
    Originally posted by kebabbert View Post
    My point is: Linux scales well on clusters. This is true. For instance, Google has a cluster of 900,000 servers and Linux runs on them. But Linux does not scale well on a "single fat" server. This is evidenced by the fact that no Linux server with 32 CPUs is for sale. If Linux out-scaled everyone else, there would be Linux servers with 32, 64, 128, and 256 CPUs, as well as large clusters. Unix people have always said that Linux does not scale well: they are referring to a "single fat" server. Everybody knows that Linux scales well on clusters. That is a fact. Large supercomputers, such as the SGI Altix, are just clusters.
    Too bad for you, because you didn't back up your claim with strong evidence. I'm repeating my question: what's the difference between CPUs and cores from an operating-system standpoint? Furthermore, is it meaningful to run benchmarks using different components? Where is a link to the benchmark of a 64-CPU HP server with only 40% CPU utilization? And two more questions: why didn't most of the vendors follow Sun's way of making scalable systems? Isn't this because their method is a legacy one and led them to bankruptcy? Why was Sun so weak in horizontal scaling, and why has Linux beaten them in HPC?

