Benchmarking Various Linux Distributions With Amazon's EC2 Cloud In 2017


  • Benchmarking Various Linux Distributions With Amazon's EC2 Cloud In 2017

    Phoronix: Benchmarking Various Linux Distributions With Amazon's EC2 Cloud In 2017

    After carrying out the recent Amazon EC2 Cloud benchmarks vs. Intel/AMD CPUs, I also decided to run some Linux distribution tests in the Elastic Compute Cloud, not having done any such comparisons in a long time. So for those wondering how different Linux distributions compare in Amazon's cloud, this article is for you.

    http://www.phoronix.com/vr.php?view=24564

  • #2
    I'm curious, why was CentOS 7 not chosen? I'm not sure of the difference between 6 and 7. We run 7, and I'd be curious to see how it compares.



    • #3
      How much did this test cost to run on AWS? IMHO, EC2 instances are too expensive if you are using them 24/7; for the same monthly price you are better off renting a physical machine or buying your own.
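
      For a rough sense of the trade-off described here, a back-of-the-envelope comparison is easy to sketch. Both prices below are hypothetical placeholders for illustration, not quoted AWS or hosting rates:

```python
# Illustrative cost comparison: 24/7 on-demand EC2 vs. a flat-rate
# dedicated server. All dollar figures are assumed placeholder values.
hourly_on_demand = 0.10        # assumed $/hour for a mid-size instance
hours_per_month = 24 * 30      # ~720 hours of continuous uptime

ec2_monthly = hourly_on_demand * hours_per_month
dedicated_monthly = 40.0       # assumed flat monthly rate for a rented box

print(f"EC2 24/7:  ${ec2_monthly:.2f}/month")   # → EC2 24/7:  $72.00/month
print(f"Dedicated: ${dedicated_monthly:.2f}/month")
```

      With these placeholder numbers the always-on instance costs nearly twice the flat rate, which is the point the comment is making; on-demand pricing only wins when the machine is idle most of the month.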



      • #4
        The results fluctuate much less than when comparing Linux distributions / operating systems on bare-metal hardware, where the kernel has complete control, etc.
        In the past I found EC2 benchmarks way too inconsistent. Every time you launch a new instance you get different results.



        • #5
          A good reason for Mick to try a distro or two every month on Amazon, then, hey?

          Interesting results. Keen to see them done again at some stage.



          • #6
            Originally posted by andrei_me View Post
            How much did this test cost to run on AWS? IMHO, EC2 instances are too expensive if you are using them 24/7; for the same monthly price you are better off renting a physical machine or buying your own.
            In my opinion, the future is for almost every programmer/developer to own one or more remote machines (baremetal or virtualized).



            • #7
              Originally posted by atomsymbol View Post
              In my opinion, the future is for almost every programmer/developer to own one or more remote machines (baremetal or virtualized).
              Or to have his own rig for compilation tucked somewhere in a closet.



              • #8
                Originally posted by starshipeleven View Post
                Or to have his own rig for compilation tucked somewhere in a closet.
                Just a note: In my opinion, C/C++ compiler performance is for the most part unrelated to cloud computing. It is related to C/C++ compilers not reusing information from previous runs (i.e., common compiler algorithms compute "x" rather than "delta(x)"). Makefiles and build tools are good at avoiding recompilation at file granularity, but avoiding recompilation at the sub-file level can only be done within the compiler, and most compilers don't do it. Even Python, a programming language where transparent memoization could be a standard feature of the implementation, doesn't implement transparent memoization. Ubiquitous incremental compilation of C/C++ code is several decades away because of the prevalent mode of thinking about reuse of information in programming today. Zapcc is a nice start; however, Zapcc and Clang currently lack some GCC-specific features, preventing Zapcc from replacing GCC.
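
                To make the memoization point concrete: Python does not memoize transparently, but it ships explicit opt-in memoization via functools.lru_cache. The function below is a hypothetical stand-in for an expensive per-unit compilation step, illustrating the "reuse results from previous runs" idea the comment describes:

```python
from functools import lru_cache

# Explicit (not transparent) memoization: the programmer must opt in
# per function with a decorator. The work function here is a toy
# stand-in for an expensive step such as parsing a translation unit.
@lru_cache(maxsize=None)
def parse_unit(name: str) -> int:
    # Repeated calls with the same argument reuse the cached result
    # instead of redoing the work.
    return sum(ord(c) for c in name)

parse_unit("foo.c")                  # computed (cache miss)
parse_unit("foo.c")                  # served from cache (cache hit)
print(parse_unit.cache_info().hits)  # → 1
```

                A compiler with sub-file incremental compilation would need the same idea applied internally and persisted across runs, which is roughly what Zapcc attempts with its caching server.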

