AMD Shanghai Opteron: Linux vs. OpenSolaris Benchmarks

  • #31
    Eh, for the paranoid, there's 'zfs scrub'. So I take it you run an fsck every second just to make sure your unaccessed data isn't corrupted? As for defragmentation, ZFS handles that behind the scenes. There are obviously some bugs with it, but you have to remember that ZFS is much more mature, and at this point BTRFS is barely usable (e.g. no ENOSPC checks; try it with 80% of the space filled and watch the CPU usage...). You could say that BTRFS will have all this and more and will generally kick ass, but it's not there yet. If it ever becomes as stable, fast and easy to use as ZFS, I'll be the first to praise it.
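
    For reference, a scrub runs per pool and reports what it finds; a minimal sketch, with 'tank' as a placeholder pool name (note the subcommand actually lives under zpool, not zfs):

        # start a background scrub of the whole pool
        zpool scrub tank
        # check progress and any checksum errors found/repaired
        zpool status -v tank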

    Originally posted by drag View Post
    So I take it that you enjoy the feeling of having corruption on your file system that goes undetected until you try to actually access it.

    Because, sure, checksums are nice (BTRFS can use them too), but they do not automatically prevent or detect corruption unless you're actually using that file or that portion of the file system.

    The way you find problems is to actually scan and examine the filesystem. If you're using checksums, for example, they can only detect a problem if you actually read the data back and compare it to the checksums. This act of looking at files systematically is called "a file system check", and it is done using 'fsck'.

    ZFS lacking an fsck does not mean that its file system never needs checking... it just means that it's missing a feature compared to Btrfs.

    And the same thing goes for fragmentation. All file systems fragment. I don't give a shit what FS or OS you're using. It's going to happen one way or another under the right circumstances. And I bet that when ZFS hits 80-90% total usage of the storage medium, it's going to start fragmenting badly.

    Again, the lack of a defrag tool for ZFS does not mean that ZFS does not fragment. It merely means that it lacks another feature compared to BTRFS.

    It's like saying that if I build a laptop without an on/off switch, I never have to worry about my batteries running out.
    Last edited by etacarinae; 09 February 2009, 03:06 PM.

    Comment


    • #32
      Originally posted by etacarinae View Post
      Eh, for the paranoid, there's 'zfs scrub'. So I take it you run an fsck every second just to make sure your unaccessed data isn't corrupted? As for defragmentation, ZFS handles that behind the scenes.
      Ah.

      So ZFS DOES actually have an fsck! And it has online defrag!

      Or doesn't it? You just told me a little bit ago that it does not need these features....



      Like I said before, the two nice things Solaris has going for it are ZFS and DTrace. Everything else is 'meh'.

      Comment


      • #33
        Heh, I'm saying that a user wouldn't need to care about that; it's all done behind the scenes.

        As for 'meh', sure, that's your take; the users of OpenSolaris disagree.

        Anyway, this misses the whole point of this thread. The article is highly misleading because it compares gcc 3.4.3 with gcc 4.2 and 4.3.
        What should have been done is to ask the OpenSolaris community which compiler is the de facto one. They would have been told to use Sun Studio Express with the -m64 -fast C/CXX flags, which optimize the code for the processor used and are what most OpenSolaris users use. But no, it's so much more exciting to say "boohoo, the ancient gcc on OpenSolaris doesn't support our latest AMD processor". Well, doh...
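
        For illustration, a 64-bit Sun Studio build of a single source file would look roughly like this (the file and output names are placeholders; -fast enables aggressive optimization targeting the build machine's CPU):

            # Sun Studio C compiler: 64-bit, optimized for the host processor
            cc -m64 -fast -o bench bench.c
            # and the same with the C++ compiler
            CC -m64 -fast -o bench bench.cpp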

        Originally posted by drag View Post
        Ah.

        So ZFS DOES actually have an fsck! And it has online defrag!

        Or doesn't it? You just told me a little bit ago that it does not need these features....



        Like I said before, the two nice things Solaris has going for it are ZFS and DTrace. Everything else is 'meh'.

        Comment


        • #34
          Originally posted by etacarinae View Post
          Heh, I'm saying that a user wouldn't need to care about that; it's all done behind the scenes.

          As for 'meh', sure, that's your take; the users of OpenSolaris disagree.

          Anyway, this misses the whole point of this thread. The article is highly misleading because it compares gcc 3.4.3 with gcc 4.2 and 4.3.
          What should have been done is to ask the OpenSolaris community which compiler is the de facto one. They would have been told to use Sun Studio Express with the -m64 -fast C/CXX flags, which optimize the code for the processor used and are what most OpenSolaris users use. But no, it's so much more exciting to say "boohoo, the ancient gcc on OpenSolaris doesn't support our latest AMD processor". Well, doh...
          One can run this "Phoronix Suite" thing on one's own, right? I think I'm going to install OpenSolaris in a VM and do just what you said.
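
          A minimal sketch, assuming the Phoronix Test Suite is installed in the VM (the test profile name below is just an example):

              # see which test profiles are available, then run one
              phoronix-test-suite list-available-tests
              phoronix-test-suite benchmark pts/compress-gzip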

          Comment


          • #35
            I remember the old benchmark here, OpenSolaris vs Linux vs FreeBSD, where the OpenSolaris binaries were compiled as 32-bit binaries. Someone here recompiled them as 64-bit OpenSolaris binaries, and his OpenSolaris benches doubled in performance! So the OpenSolaris numbers should have been doubled, easily winning. That was quite a dubious benchmark (Linux used 64-bit binaries and OpenSolaris 32-bit binaries).
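
            Whether a binary was built 32-bit or 64-bit is trivial to check on either OS, by the way (the path here is just an example):

                # prints e.g. "ELF 64-bit LSB executable, x86-64 ..." or "ELF 32-bit ..."
                file /usr/local/bin/some-benchmark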

            For this benchmark, I wonder how Linux would fare if you benchmarked GCC 4.x against GCC 3.x? Am I the only one who thinks the GCC 3.x benches would lose big? This benchmark is about older versions of a compiler vs newer versions. Not really an optimal benchmark.
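
            A fair version of that comparison is easy to sketch: same source, same flags, only the compiler version changes (the binary and source names are hypothetical; Debian/Ubuntu ship versioned compiler binaries like these):

                gcc-3.4 -O2 -o bench-gcc3 bench.c
                gcc-4.3 -O2 -o bench-gcc4 bench.c
                # run both and compare wall-clock times
                time ./bench-gcc3
                time ./bench-gcc4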

            How on earth do you come up with the great idea of saying that Linux is faster, when OpenSolaris is using 32-bit binaries vs Linux's 64-bit binaries, and also an old GCC vs a new GCC?

            I remember a discussion in that thread. Someone posted "evidence" that Linux was way faster than Solaris. The article he was referring to described migrating from an 800MHz SPARC to a dual-CPU 2.8GHz Xeon running Linux, or something similar. The article showed the 800MHz SPARC benches vs the 2.8GHz Linux benches and concluded that Linux is faster. Now, that is not really fair, is it?

            I would really like to see some fair benches sometime: 64-bit binaries and the same compiler version on all OSes. But maybe that is just a dream.

            I could do some benches of my own on my blog: old Linux 2.4, old GCC versions, 32-bit binaries and an 800MHz CPU vs the newest Solaris version, the newest compiler version, 64-bit binaries and an Intel Core i7 CPU. I think the Linux camp would scream out loud and call me dumb and incompetent. But when Phoronix does the same thing, that is OK and fair. Eh?

            It is like when someone here stated that Linux runs on SGI machines with 512 CPUs - but he didn't mention that the SGI Linux kernel is specially tailored and modified. It is not a normal Linux kernel, which doesn't scale well. To me, scalability is when you use the very same kernel and the same installation DVD from small computers up to big monster Enterprise computers with lots of CPUs and lots of threads (Solaris). If you have to modify the kernel, then it is not scalable.

            Comment


            • #36
              Will you be running Linux in a VM as well?

              Originally posted by flice View Post
              One can run this "Phoronix Suite" thing on one's own, right? I think I'm going to install OpenSolaris in a VM and do just what you said.

              Comment


              • #37
                Well, it was never mentioned which Java implementation was used. BTW, there is a sun-java6-plugin package for amd64 now too, in Ubuntu multiverse or Debian sid. No need for other implementations since Java 6u12.
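
                On Ubuntu (with multiverse enabled) or Debian sid, installing it would be roughly:

                    # sun-java6-plugin is the package named above
                    sudo apt-get update
                    sudo apt-get install sun-java6-plugin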

                Comment


                • #38
                  @kebabbert

                  This benchmark is probably one of the few I found to be fair:



                  Notice how old the Linux kernels in this test are.

                  But when Phoronix does the same thing, that is OK and fair. Eh?
                  I agree with you. They did the same thing with the Ubuntu vs Mac OS X benchmark.

                  It is like when someone here stated that Linux runs on SGI machines with 512 CPUs - but he didn't mention that the SGI Linux kernel is specially tailored and modified. It is not a normal Linux kernel, which doesn't scale well. To me, scalability is when you use the very same kernel and the same installation DVD from small computers up to big monster Enterprise computers with lots of CPUs and lots of threads (Solaris). If you have to modify the kernel, then it is not scalable.
                  What stops them from modifying the Solaris kernel? Show me evidence that a stock Linux kernel doesn't scale well. If you get a big performance improvement from modifying the kernel, it's rather stupid to insist on using the same kernel everywhere.
                  Last edited by kraftman; 09 February 2009, 05:53 PM.

                  Comment


                  • #39
                    Originally posted by etacarinae View Post
                    Will you be running Linux in a VM as well?
                    Sure, I will (if I find the time to do it).

                    Comment


                    • #40
                      Originally posted by kraftman View Post
                      @kebabbert

                      This benchmark is probably one of the few I found to be fair:



                      Notice how old the Linux kernels in this test are.
                      I don't know much about Linux, so I don't know how old the kernels are. But this test seems better. I will have to read it later. Thank you. I hope they didn't do anything like in the Phoronix benches: 32-bit binaries against 64-bit, old versions of software vs new versions, etc. I will see if they did something similar. As someone showed, 64-bit OpenSolaris binaries doubled the numbers, and OpenSolaris should have won easily.

                      Originally posted by kraftman View Post
                      What stops them from modifying the Solaris kernel? Show me evidence that a stock Linux kernel doesn't scale well. If you get a big performance improvement from modifying the kernel, it's rather stupid to insist on using the same kernel everywhere.
                      But if you have to change and modify the Linux kernel, I wouldn't state that Linux scales well. As I said, to me scalability is when you use the same installation DVD from small computers up to Big Iron. If you have lots of different Linux kernels depending on the machine and the number of CPUs, then it is not scalable. Then it is modifiable.

                      About Solaris scaling:


                      "These types of techniques allow the Solaris kernel to scale to thousands of threads, up to 1 million I/Os per second, and several hundred physical processors.
                      ...
                      Within the next five years, expect to see CMP hardware scaling to as many as 512 processor threads per system, pushing the requirements of operating system scaling past the extreme end of that realized today."

                      About Linux scaling:


                      "you don?t currently see the level of geometric performance increases on Linux above 16 cores like you do with UNIX. The maturity in the Linux kernel for this level of enterprise performance and stability on this type of hardware just isn?t there yet."

                      About Linux as a file server:
                      I am frequently asked by potential customers with high I/O requirements if they can use Linux instead of AIX or Solaris. No one ever asks me about

                      Comment
