Docker Benchmarks: Ubuntu, Clear Linux, CentOS, Debian & Alpine


  • #11
    Originally posted by planetguy View Post

    Clear Linux targets Haswell+ Intel CPUs exclusively, so they can choose instructions that are fast on that hardware and absent elsewhere. Kubuntu is a general-purpose distribution, so it can't do this.
    This is not correct; Clear Linux runs on CPUs much older than Haswell as well, and we don't use AVX by default. We do, of course, add runtime detection, and on systems with AVX2 we will use AVX2 for heavy math stuff.
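    For anyone curious, here is what that kind of runtime dispatch can look like in practice. This is just a minimal sketch with made-up function names (not Clear Linux's actual code), using GCC/Clang's __builtin_cpu_supports to take an AVX2 path only on CPUs that report the feature:

      #include <stdio.h>

      /* Baseline path that runs on any x86-64 CPU. */
      static double sum_generic(const double *v, int n) {
          double s = 0.0;
          for (int i = 0; i < n; i++)
              s += v[i];
          return s;
      }

      /* AVX2 path: compiled into the same binary, but only called
         when the running CPU actually reports AVX2. */
      __attribute__((target("avx2")))
      static double sum_avx2(const double *v, int n) {
          double s = 0.0;
          for (int i = 0; i < n; i++)   /* compiler may vectorize this with AVX2 */
              s += v[i];
          return s;
      }

      double sum(const double *v, int n) {
          if (__builtin_cpu_supports("avx2"))
              return sum_avx2(v, n);
          return sum_generic(v, n);
      }

      int main(void) {
          double v[] = {1.0, 2.0, 3.0, 4.0};
          printf("%f\n", sum(v, 4));
          return 0;
      }

    The binary still runs on pre-AVX hardware; it just falls back to the generic path there.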

    Comment


    • #12
      Originally posted by planetguy View Post

      Clear Linux targets Haswell+ Intel CPUs exclusively, so they can choose instructions that are fast on that hardware and absent elsewhere. Kubuntu is a general-purpose distribution, so it can't do this.

      Clear Linux also does profile-guided optimization - watching a running program to see where it spends its time. Profile-guided optimization would probably help Kubuntu, but again, different CPUs are faster at different tasks, so which CPU should it optimize for?
      I would think each distribution should optimize for whichever CPU it is installed on.

      Comment


      • #13
        Originally posted by Goddard View Post

        I would think each distribution should optimize for whichever CPU it is installed on.
        It's not practical to do that. First, it's necessary to recompile the code for every architecture and, in some instances, arguably for every CPU model. Features like function multi-versioning could be used to reduce the need for recompilation. However, this feature increases object/executable file sizes.
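        For reference, that multi-versioning is exposed in GCC as function multi-versioning. A minimal sketch (a hypothetical function, not from any distro's build system) using the target_clones attribute, which emits one clone per listed target plus a resolver, which is exactly where the extra object/executable size comes from:

          #include <stdio.h>

          /* GCC builds one copy of this function per listed target plus a
             resolver that picks the best one for the running CPU at load
             time; each extra clone grows the binary. */
          __attribute__((target_clones("default", "sse4.2", "avx2")))
          double dot(const double *a, const double *b, int n) {
              double s = 0.0;
              for (int i = 0; i < n; i++)
                  s += a[i] * b[i];
              return s;
          }

          int main(void) {
              double a[] = {1.0, 2.0, 3.0};
              double b[] = {4.0, 5.0, 6.0};
              printf("%f\n", dot(a, b, 3));
              return 0;
          }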

        Comment


        • #14
          Originally posted by arjan_intel View Post

          This is not correct; Clear Linux runs on CPUs much older than Haswell as well, and we don't use AVX by default. We do, of course, add runtime detection, and on systems with AVX2 we will use AVX2 for heavy math stuff.
          My mistake. I thought I read that it was only supported on HSW+, and when I tried it on my Core 2 Duo system it failed.

          Comment


          • #15
            Originally posted by Goddard View Post
            I would think each distribution should optimize for whichever CPU it is installed on.
            You might want to have a look at Gentoo, as it is the distro that does that by default. It basically recompiles from source each time it needs to update something. Its userbase is very smug as a result.

            For most users it's not really practical though.

            Comment


            • #16
              Originally posted by Goddard View Post

              I would think each distribution should optimize for whichever CPU it is installed on.
              If you want that, get Gentoo, and set it up for your particular CPU. I've done it, and it works splendidly, but it takes its time to compile, of course.
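              For anyone who hasn't tried it, the per-CPU part on Gentoo mostly comes down to a couple of lines in /etc/portage/make.conf; the values below are only an example to adjust for your own machine:

                # /etc/portage/make.conf (example values)
                CFLAGS="-march=native -O2 -pipe"   # tune every package for the CPU doing the build
                CXXFLAGS="${CFLAGS}"
                MAKEOPTS="-j4"                     # parallel build jobs, roughly one per core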

              Comment


              • #17
                Michael
                A few notes about storage performance in Docker. I didn't find in the article which Docker storage backend you were using, and other posters say you used default parameters for the tested apps.
                Docker has several storage backends; for example, Ubuntu 16.10 uses the AUFS driver and CentOS 7 uses devicemapper in loopback mode, and both of these options suck in terms of write performance.
                See https://docs.docker.com/engine/userg...er-performance
                So any write-intensive test, like Compile Bench, will be slow if it writes data to the container layer.
                But in any production use case for containers you won't write to container storage; you'll use volumes for permanent data storage. And Docker volumes on a standalone machine should have very little performance impact on disk reads and writes.

                Please, next time you're testing containers, mount volumes at the locations where the tested apps will write. Without that you'll be testing how much AUFS sucks, not the distribution inside the container.
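                Something along these lines would do it; the volume name, image, and path below are just placeholders for wherever the benchmark actually writes:

                  # Check which storage driver the daemon is using (AUFS, devicemapper, overlay2, ...)
                  docker info | grep "Storage Driver"

                  # Create a named volume and mount it where the benchmark writes,
                  # so the writes bypass the container's copy-on-write layer.
                  docker volume create benchdata
                  docker run --rm -v benchdata:/data ubuntu:16.10 \
                      sh -c 'dd if=/dev/zero of=/data/testfile bs=1M count=1024 conv=fdatasync'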

                Comment


                • #18
                  Originally posted by defaultUser View Post

                  It's not practical to do that. First, it's necessary to recompile the code for every architecture and, in some instances, arguably for every CPU model. Features like function multi-versioning could be used to reduce the need for recompilation. However, this feature increases object/executable file sizes.
                  I understand not doing it for EVERYTHING, but Intel and AMD are the major platforms, AMD hasn't released anything new for a while, and Intel doesn't release CPUs that often. It seems like some build server in the back corner somewhere could manage this, no?

                  Comment


                  • #19
                    Originally posted by defaultUser View Post

                    It's not practical to do that. First, it's necessary to recompile the code for every architecture and, in some instances, arguably for every CPU model. Features like function multi-versioning could be used to reduce the need for recompilation. However, this feature increases object/executable file sizes.
                    Actually, Windows used to do exactly that. I don't know if it still does, but I see no reason not to.
                    It is not hard to have the kernel compiled for every architecture that's widely used (e.g. Intel's Core, AMD's Bulldozer and the upcoming Zen) and install the one you actually need. Unfortunately, that doesn't work well with installers or update tools in Linux. And of course, there's not much of an incentive to do that, since you can compile the kernel yourself with any optimizations you want. Or at least you can after a crash course in kernel compilation and the available optimizations.
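                    The crash course is shorter than it sounds; on a Debian/Ubuntu box with the kernel sources unpacked, it is roughly the following (the processor-family option, e.g. CONFIG_MCORE2, is chosen under "Processor type and features"):

                      # Start from the running kernel's config, then pick a processor family.
                      cp /boot/config-"$(uname -r)" .config
                      make olddefconfig
                      make menuconfig

                      # Build installable packages and install the new image.
                      make -j"$(nproc)" bindeb-pkg
                      sudo dpkg -i ../linux-image-*.deb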

                    Comment


                    • #20
                      Originally posted by Goddard View Post
                      I understand not doing it for EVERYTHING, but Intel and AMD are the major platforms, AMD hasn't released anything new for a while, and Intel doesn't release CPUs that often. It seems like some build server in the back corner somewhere could manage this, no?
                      Note that you would need to split distros again into hard categories like 32-bit/64-bit (just with "x86 CPUs 2010-2012", "x86 CPUs 2013-2014", "x86 CPUs 2015-2016" and so on, for example), as every few years new instructions are added and you can't run binaries that expect certain instructions on hardware that does not support them. This means more annoyances for users, duplication on build servers, and so on. I personally think it is doable, but I doubt that for most use cases it is going to be worth it.

                      Most half-serious distros offer source packages; you can easily download them, change the CFLAGS to match your system, and recompile if you really need it.
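                      On Debian/Ubuntu that rebuild looks roughly like this; zlib1g is just an example package, and the DEB_*_APPEND variables are picked up by dpkg-buildflags in most packages:

                        # Grab the build dependencies and the source package.
                        sudo apt-get build-dep zlib1g
                        apt-get source zlib1g
                        cd zlib-*/

                        # Rebuild with extra CFLAGS tuned for the local CPU, then install.
                        DEB_CFLAGS_APPEND="-march=native" DEB_CXXFLAGS_APPEND="-march=native" \
                            dpkg-buildpackage -us -uc -b
                        sudo dpkg -i ../zlib1g_*.deb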

                      Clear Linux devs have implemented some trickery that detects the type of system and switches binaries for some applications, but I suspect this would be a pain in the ass if done on a decent scale. Debian has something like 60k packages; even if 80% is obsolete shovelware that does not need this, it's still a ton of work, and debugging issues will get so much more fun afterwards.

                      Comment
