AMD Shanghai Opteron: Linux vs. OpenSolaris Benchmarks


  • Peter_Cordes
    replied
    Originally posted by kebabbert View Post
    No problem.

    Anyway, I've told you what I consider scalability to be, and you don't agree with that. Let's stop there?
    Sure, I'd be happy to.

    And besides, it's Friday today! I will drink a beer for you Linux guys, so if you suddenly feel a bit drunk, that beer came from me! I really wish you a nice weekend with lots of beer and sex!
    Cheers!



  • kebabbert
    replied
    Originally posted by llama View Post
    Yeah, sorry, I was feeling snarky. I think we just have different ways of thinking about computers. I still don't understand how you use your way of understanding things in practice, which is why I gave some examples of how I use my way. I take back the "polite way of saying that" comment, because there's no reason for me to assume your way doesn't work well for you.
    No problem.

    Anyway, I've told you what I consider scalability to be, and you don't agree with that. Let's stop there? We both agree that GNU/Linux and OpenSolaris are good working OSes with lots of users. Let's stop there. I think GNU/Linux is great and I've learned much from it. Maybe in the future I'll switch to GNU/Linux or GNU/Solaris. Even Sun seems to agree that GNU is the way to go, as OpenSolaris is more like GNU than Solaris. The only result of this coopetition between GNU/Linux and GNU/Solaris will be that we sit in the middle and choose which OS and which tech we like.

    And besides, it's Friday today! I will drink a beer for you Linux guys, so if you suddenly feel a bit drunk, that beer came from me! I really wish you a nice weekend with lots of beer and sex!



  • Peter_Cordes
    replied
    This is about the Big Kernel Lock, which is the ugly hack Linux used in the early days, instead of having separate locks for each data structure that needed protection. I think the BKL isn't used anywhere important (well, that post did talk about AC working to remove it from TTY code).

    The BKL obviously doesn't scale well, but if you're not using a crufty old driver that uses it, and your workload doesn't otherwise hit much BKL-using kernel code, it won't really hurt scalability.
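    To make that concrete, here's a minimal sketch (my own, not from the thread) of what a BKL-using driver looked like. lock_kernel()/unlock_kernel() from <linux/smp_lock.h> were the real historical API; the ioctl handler and its helper are made up for illustration:

        /* Hedged sketch of a legacy driver that serializes on the BKL.
         * While one CPU holds the BKL, every other BKL user in the
         * whole kernel waits -- that's why it doesn't scale. */
        #include <linux/smp_lock.h>

        static int old_driver_ioctl(struct inode *inode, struct file *file,
                                    unsigned int cmd, unsigned long arg)
        {
                int ret;

                lock_kernel();                    /* take the one global lock */
                ret = do_the_real_work(cmd, arg); /* hypothetical helper */
                unlock_kernel();                  /* let other BKL users run */
                return ret;
        }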

    For real-time applications, even one infrequent source of long latency is unacceptable. E.g. you would not be happy if your music had a dropout even once in a couple of hours. You can fix that with bigger buffers, but sometimes you really need low latency: e.g. running a control program for a robot seal balancing a ball on its nose. If it doesn't move soon enough in response to input that the ball is starting to roll off, the ball will fall. It doesn't matter that the average latency is great; it's the worst case that's the deal-breaker. That's the difference between how locking matters for scalability and how it matters for real-time applications.
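    To illustrate the average-vs-worst-case point, here's a self-contained sketch (again mine, not from the thread) that measures how much a 1 ms sleep overshoots and reports both the mean and the worst case. On a loaded, non-preemptible kernel, the worst case is the number that blows up:

        /* Measure mean and worst-case overshoot of a 1 ms sleep.
         * Build (2009-era glibc needs -lrt): gcc -O2 lat.c -o lat -lrt */
        #include <stdio.h>
        #include <time.h>

        static long long ns_diff(struct timespec a, struct timespec b)
        {
                return (b.tv_sec - a.tv_sec) * 1000000000LL
                        + (b.tv_nsec - a.tv_nsec);
        }

        int main(void)
        {
                struct timespec req = { 0, 1000000 };  /* ask for 1 ms */
                long long total = 0, worst = 0, lat;
                int i, iters = 1000;

                for (i = 0; i < iters; i++) {
                        struct timespec before, after;
                        clock_gettime(CLOCK_MONOTONIC, &before);
                        nanosleep(&req, NULL);
                        clock_gettime(CLOCK_MONOTONIC, &after);
                        lat = ns_diff(before, after) - 1000000;
                        total += lat;
                        if (lat > worst)
                                worst = lat;
                }
                printf("mean overshoot %lld ns, worst %lld ns\n",
                       total / iters, worst);
                return 0;
        }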

    Different locking primitives all have their uses; e.g. see this (old) article by Robert Love: http://www.linuxjournal.com/article/5833. Mostly, if a critical section is really short, a spinlock can be more appropriate than doing a context switch and coming back when the lock is unlocked; otherwise you probably do want to sleep instead of busy-waiting. And there are multiple-reader, single-writer locks, and RCU (read-copy-update) data structures that let readers keep using the old copy while the writer constructs the new copy.
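    As a userland illustration of the short-critical-section trade-off (my own sketch; the counter workload is made up, but pthread_spin_* and pthread_mutex_* are the real POSIX APIs):

        /* Spinlock vs. mutex around a tiny critical section.
         * For a section this short the spinlock avoids sleep/wakeup
         * overhead; for a long section it would just burn CPU.
         * Build: gcc -O2 locks.c -o locks -lpthread */
        #include <pthread.h>
        #include <stdio.h>

        static pthread_spinlock_t spin;
        static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
        static long counter;

        static void *worker(void *arg)
        {
                int use_spin = *(int *)arg;
                int i;

                for (i = 0; i < 1000000; i++) {
                        if (use_spin) {
                                pthread_spin_lock(&spin);   /* busy-waits */
                                counter++;
                                pthread_spin_unlock(&spin);
                        } else {
                                pthread_mutex_lock(&mtx);   /* may sleep */
                                counter++;
                                pthread_mutex_unlock(&mtx);
                        }
                }
                return NULL;
        }

        int main(void)
        {
                int use_spin = 1;   /* flip to 0 to compare the mutex */
                pthread_t t1, t2;

                pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);
                pthread_create(&t1, NULL, worker, &use_spin);
                pthread_create(&t2, NULL, worker, &use_spin);
                pthread_join(t1, NULL);
                pthread_join(t2, NULL);
                printf("counter = %ld\n", counter);
                return 0;
        }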



  • kraftman
    replied
    Originally posted by trasz View Post
    No. This was about replacing so-called 'semaphores' (actually, Linux's implementation of semaphores) with so-called 'mutexes'. Spin locks are still the fundamental synchronisation mechanism.

    Again, it's about _userland_ (pthreads) mutexes, which are completely unrelated to kernel synchronisation.
    Ok, what about this one:

    It's about:

    'it turns the BKL into an ordinary mutex and removes all "auto-release" BKL legacy code from the scheduler.'

    and:

    'The main disadvantage of a giant lock is that it eliminates concurrency, thus decreasing performance on multiprocessor systems.'

    So it seems there were no performance penalties in Linux on multiprocessor systems after this change. And following this:

    As some of the latency junkies on lkml already know it, commit 8e3e076
    ("BKL: revert back to the old spinlock implementation") in v2.6.26-rc2
    removed the preemptible BKL feature and made the Big Kernel Lock a
    spinlock and thus turned it into non-preemptible code again. This commit
    returned the BKL code to the 2.6.7 state of affairs in essence.
    there weren't any before the commit mentioned above.

    I said before that I believe you, but it would be nice if you could give some proof (because some people may not).

    Btw, would such good results in RTLinux be possible if "spin locks are still the fundamental synchronisation mechanism"? If so, I don't see anything wrong with them.


    EDIT:

    Ok, I found it myself. There are spinlocks, and Linux devs plan to make them preemptible, etc. But that's related to the real-time Linux approach, and I'm not sure whether it affects scalability. Btw, there are some changes, like the memory management improvements in 2.6.28, which should improve Linux scaling.
    Last edited by kraftman; 19 February 2009, 11:47 AM.



  • Peter_Cordes
    replied
    Originally posted by kraftman View Post
    In what? This is scalability:

    http://www.linfo.org/scalable.html
    I like that definition of scalability, and it's what I have in mind when I say "scalability". That page is especially good because at the end it gets to talking about why anyone should care about scalability when choosing hardware or software. E.g. knowing that we'll need a bigger system in a year, after our business takes off and we have way more clients, we should go with something such that the time spent learning it won't be wasted later. I.e. we can get a bigger version of this hardware later, and the BIOS options will mostly look the same, and the lights will mean the same thing. And for software, once we learn our way around /proc and all that, we can use that knowledge when we put GNU/Linux on the upcoming bigger machine. This is definitely the case even if you customize Linux a bit for your bigger or smaller machine, since, like I said, that doesn't make it behave qualitatively differently, just maybe a little faster.



  • Peter_Cordes
    replied
    Originally posted by kebabbert View Post
    Details are important. Yes, that is true. I have a double Master's degree: one in Math and one in Computer Science (algorithm theory). All this math has taught me that if you have several theorems that behave almost the same, then you can abstract them into one theorem. If you cannot, then the theory is inferior and needs to be altered into something more general. Maybe that is the reason I think one Solaris kernel is preferable to 42 different Linux kernels depending on the task you are trying to solve. You know, different tools for different tasks is NOT scalability. You can never state that the Linux kernel is scalable when you need to use different versions.
    A generic Linux kernel will scale pretty well (e.g. from a single-core desktop to a 16-core server or larger). But you could maybe wring a little more performance out of any one situation by specializing a kernel for that specific hardware (not per workload). Most people don't compile their own, because it doesn't help that much, and you'd have to recompile it for every security update.

    You don't need to use different versions. So stop saying that you do. A GNU/Linux distro like Ubuntu for AMD64 will scale quite well with its one universal kernel. Compiling your own custom kernel might help more at the extreme ends of the scalability range, e.g. on really big iron or single-core slow desktops, but we both agree already that no matter what you do, Linux probably isn't ready for really big iron the way Solaris is.

    The Solaris install DVD is the same, no matter which machine. THAT is scalability. It is not something we have to agree on, or disagree on. It is a fact. Solaris is scalable, Linux is not. Otherwise, I could equally say "C64 is scalable"; I just have to modify it. It is simply plain stupid to say so. It is nothing to agree upon or not; it is stupid to say so.
    You can get more scalability by building different binaries from the same code base. I don't see that as "modifying it". To get more scalability from the C64 "operating system"(?) you would need major rewrites, and first-writes of major features it doesn't have at all. Maybe you just picked an example that's too extreme, because it looks like a straw-man to me.

    I can agree with you to this extent, though: Ubuntu GNU/Linux, as a distro of compiled binaries, scales to the range of machines that it targets: not-too-ancient desktops up through >16-core servers at least. To go beyond that range, it helps to start customizing the distribution's source and rebuilding parts. Specifically, you can maybe gain some performance by editing the configuration files for the Linux kernel and rebuilding that package.

    Even unmodified, Ubuntu will run on large machines (they compile the kernel with NR_CPUS=64, so cores beyond that will go unused; Linux itself claims it can be compiled for up to 512 cores). Maybe some bottlenecks will be worse than with a custom kernel that leaves out some options you don't need, and so on. I don't know how e.g. RHEL or SuSE configure their kernels, since I just use Ubuntu and sometimes Debian.

    If you want to talk about the scalability of a specific distro, without allowing customized kernels, then that's one thing. But Linux itself doesn't have an official binary, so its native form is the source. The scalability of Linux is not just the range of machines a hypothetical distro could build a single kernel binary to do well on; it's the range of machines that kernels built from the same source code can handle. That's how I see it, anyway. Obviously you've seen my previous statement of this definition of scalability and rejected it, so that's one place where we disagree about word definitions more than about what Linux is actually like.

    But certainly you haven't studied much math.
    Not a lot of formal math, no. I have an undergrad B.Sc., combined honours in physics and CS. And I was always more interested in the understanding-how-the-world-works part of physics than in the math formalism. So yeah, I guess I didn't know how much of an analogy you were intending with the word "theorem". Theorem = proven hypothesis, right? My mental models of how computers behave aren't usually formally proven.

    so you don't understand what I am talking about, or why I emphasize that all the time.
    I think I'm getting closer, but I still don't know what sort of theorem your one all-encompassing theorem modelling OpenSolaris behaviour would be.

    "But I couldnt think of a polite way of saying that". If you want to get sticky, we can.
    Yeah, sorry, I was feeling snarky. I think we just have different ways of thinking about computers. I still don't understand how you use your way of understanding things in practice, which is why I gave some examples of how I use my way. I take back the "polite way of saying that" comment, because there's no reason for me to assume your way doesn't work well for you.



  • kraftman
    replied
    Originally posted by etacarinae View Post
    Does that sound fair to you?
    Of course it's not. I would love to see some 'real world' benchmarks. Btw, I'm getting the feeling that only benchmarking the same system against different settings makes sense.

    @kebabbert

    As I said, I see at least two definitions of scalability, but I don't care about it too much.

    Trasz
    Do you have any proof and links? Then maybe you could settle this discussion once and for all.
    Sometimes I base my opinions on observations, personal feelings, etc., so he may have done the same in this case. I'm able to believe him, and I don't care about it too much either :>



  • etacarinae
    replied
    Originally posted by kraftman View Post
    P.S. GNU/Solaris (OpenSolaris) is quite interesting, but I don't understand why Sun, the OpenSolaris makers, don't compile it with the recommended flags? They should release an optimized x86_64 version, in my opinion. Phoronix wouldn't be cheating so much then.
    Gee, not again! kraftman, the Phoronix test suite *COMPILES THE SOFTWARE ON ITS OWN* (with an outdated gcc compiler on OpenSolaris); the binaries that come with OpenSolaris are just fine, don't worry. So once again, the complaint was about the test suite generating unoptimized binaries for OpenSolaris: 32-bit, without OpenMP support, etc., while at the same time generating optimized 64-bit binaries on Linux, with OpenMP support. Does that sound fair to you?
    Last edited by etacarinae; 18 February 2009, 10:41 AM.



  • kebabbert
    replied
    Originally posted by kraftman View Post
    Any system I tried worked perfectly in vbox, but there's another reason why I said so :> It seems that some of you are trying to move the "battlefield" away from Solaris. You can run C64 on supercomputers under an emulator, not natively (and C64 isn't comparable to any modern OS). The point is that Linux runs natively. Btw, from what I see there are at least two definitions of scalability, and probably both are correct.

    P.S. GNU/Solaris (OpenSolaris) is quite interesting, but I don't understand why Sun, the OpenSolaris makers, don't compile it with the recommended flags? They should release an optimized x86_64 version, in my opinion. Phoronix wouldn't be cheating so much then.
    No, I am not trying to move the battlefield. I am only saying: give me any OS, and I claim it is possible to reprogram it so it runs on whatever machine you want. For instance, I could heavily reprogram C64 to run on a large cluster. According to you, C64 would then be scalable, because it runs natively on large clusters. I don't agree with your definition; that is not scalability.

    And of course Linux guys find GNU/OpenSolaris interesting. I know Solaris guys that hate OpenSolaris. "It is not Solaris anymore," they say. It is GNU with a Solaris kernel. Of course Linux guys like Ian Murdock and his Debian, so they probably find OpenSolaris easier to like than Solaris. I personally think OpenSolaris is more Linux than Solaris; it has more of a Linux userland than a Solaris one. I am sceptical of OpenSolaris. Actually, I've never really used OpenSolaris. I've installed it in VB, but it took 5 min to move the mouse, so I just shut it down. I myself prefer Solaris.

    And I don't understand what you mean by they "should have OpenSolaris 64 bits"? Solaris has been 64-bit for many years. Upon install, it chooses between 32-bit and 64-bit automatically.

    Trasz
    Do you have any proof and links? Then maybe you could settle this discussion once and for all.



  • trasz
    replied
    No. This was about replacing so-called 'semaphores' (actually, Linux's implementation of semaphores) with so-called 'mutexes'. Spin locks are still the fundamental synchronisation mechanism.

    Originally posted by kraftman View Post
    Is this what you said based on some articles, or did you spend a while searching lkml? :> Aren't you talking about the problem with a crappy malloc library?

    This article (from 2000) mentions pthreads mutexes and RTLinux:

    Mutexes are important for RT, aren't they?
    Again, it's about _userland_ (pthreads) mutexes, which are completely unrelated to kernel synchronisation.
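
    To underline what "userland" means here, a tiny sketch (mine, not trasz's): the mutex below lives in the process's own memory, and on Linux an uncontended lock/unlock is just an atomic operation in userspace; the kernel is only entered, via futex(2), when threads actually collide. None of this touches the kernel's internal spin locks:

        /* A plain pthreads mutex: userland synchronisation.
         * Build: gcc -O2 mtx.c -o mtx -lpthread */
        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static int shared;

        static void *worker(void *unused)
        {
                int i;
                for (i = 0; i < 100000; i++) {
                        pthread_mutex_lock(&lock);
                        shared++;               /* protected section */
                        pthread_mutex_unlock(&lock);
                }
                return NULL;
        }

        int main(void)
        {
                pthread_t a, b;
                pthread_create(&a, NULL, worker, NULL);
                pthread_create(&b, NULL, worker, NULL);
                pthread_join(a, NULL);
                pthread_join(b, NULL);
                printf("shared = %d\n", shared); /* prints 200000 */
                return 0;
        }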

